Journal Rankings: positioning the field of educational research and educational academics

Australian Education Journals: Quantitative and Qualitative Indicators

Australian Academic & Research Libraries, 2009

This paper reports on a study which applied citation-based measures to Australian education journals. Citation data were drawn from two sources, Web of Science and Scopus, and these data were used to calculate each journal's impact factor, h-index, and diffusion factor. The rankings resulting from these analyses were compared with draft rankings assigned to the journals for Excellence in Research for Australia (ERA). Scopus emerged as the citation source most advantageous to these journals, and some consistency across the citation-based measures was found.

Introduction and background

Recent initiatives in the way research is funded in Australian higher education, firstly the Research Quality Framework (RQF) and now Excellence in Research for Australia (ERA), have prompted many disciplinary groups in the sector to focus on journal ranking and the use of citations as measures of quality and impact, respectively. Together these comprise the cornerstone of ERA's method for assessing research outputs in the form of journal articles. This paper reports on the findings of a study which, like several earlier studies, has examined a selection of Australian journals to determine their impact. 1-4 It differs from previous research in that ERA's draft list of ranked journals is available to compare tier rankings with citation-based metrics for those journals included in the study. Furthermore, the citation data used to calculate impact measures were drawn from two sources, Web of Science and Scopus, enabling additional comparisons to be drawn. In this study, Australian journals in the discipline of education are examined, extending the authors' previous investigations of social science and humanities journals. 5-7

Two papers released in December 2008, ERA Indicator Principles and ERA Indicator Descriptions, provided the higher education sector with a general outline of the indicators to be applied in the ERA process (commencing in 2009). 8, 9 Assessment of journal articles will be based on a list of ranked journals and citation data, the latter being drawn from 'the most appropriate citation data supplier for each discipline'. 9 These indicators, among others, will be tested in the first ERA round for the discipline cluster Physical, Chemical and Earth Sciences (PCE). However, citation data are not being used as an indicator in the second cluster being assessed, Humanities and Creative Arts (HCA), due to the lack of available, reliable citation data in these disciplines. Instead, peer review will be applied to 20% of the research output reported. 10 At the time of writing no information is available regarding the specific indicators to be applied in the cluster Social, Behavioural and Economic Sciences (SBE), in which the education discipline is located. In the opinion of the current authors the likelihood that citation data will be used in the SBE cluster is high, for two reasons. Firstly, the cluster includes a number of disciplines that are relatively well
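For readers unfamiliar with the measures named in the abstract above, the two most widely used can be stated precisely. The following sketch is illustrative only, not drawn from the study itself; the function names are hypothetical, and it computes the h-index from a list of per-article citation counts and the standard Garfield-style two-year impact factor from aggregate counts:

```python
def h_index(citation_counts):
    """h-index: the largest h such that a journal (or author) has
    h items with at least h citations each (Hirsch, 2005)."""
    ranked = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # this item still "supports" an h of this size
        else:
            break  # counts are sorted, so no later item can qualify
    return h


def two_year_impact_factor(citations_in_year, citable_items_prev_two_years):
    """Two-year impact factor: citations received in year Y to items
    published in years Y-1 and Y-2, divided by the number of citable
    items published in those two years."""
    return citations_in_year / citable_items_prev_two_years
```

For example, a journal whose five most-cited articles have 10, 8, 5, 4 and 3 citations has an h-index of 4, since four articles each have at least four citations, but not five articles with at least five. The diffusion factor examined in the study, which weights citations by the breadth of citing sources, has no single standard formula and is omitted here.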

From journal rankings to making sense of the world

Journal ranking is yet another form of discriminatory practice in the higher education sector. I outline here why journal ranking should be considered a small but significant part of the hegemonic structures of inequality in the academic labor process. To further the debate that Adler and Harzing set out, I explain why the rankings should be considered part and parcel of a broader game of White masculine domination that excludes research that matters: research that helps us understand the world of work and contributes to meaningful improvements for individuals and organizations.

Should Journal Rankings Matter? Assigning “Prestige and Quality” in the Neoliberal Academy

Affilia, 2020

Why do we publish and who does this knowledge benefit? Our interest in these questions was reignited with the recent publication by Hodge et al. (2019), who reported findings from their survey of social work faculty who teach in programs that offer a PhD. Respondents were asked to rank order a list of social work journals according to their sense of each journal's quality and to assess their degree of familiarity with those journals. Using these reports, the authors created two journal rankings: quality and prestige. The latter was derived by multiplying a journal's composite quality score by its familiarity score. These rankings were offered as an alternative to commonly used impact factor scores, which have long been critiqued as limited measures affected by field-dependent dynamics and vulnerable to "journal impact factor engineering" (Reedijk & Moed, 2008). Considering the journal's unique mission to create a space for feminist scholarship in social work, an aim which continues to be a marginalizing endeavor in the field (Barretti, 2011), we were surprised to see Affilia: Journal of Women and Social Work appear in the rankings at all. We were unsurprised, however, that it did not make it to the top tier. As Hodge and colleagues (2019) remind us, "the venues in which scholarship is published can have a significant, determinative effect on decisions regarding tenure, promotion, funding, merit increases, and professional visibility" (p. 1). The hierarchy of ranking has material consequences for journals as well, including Affilia. Indeed, Affilia's perceived value influences the number and types of submissions, who is willing to review those submissions, as well as how many individuals or institutions are willing to pay to access them. Journal rankings do matter, whatever our views of them. But should they? Our aim here is not to evaluate the article by Hodge and colleagues or critique the methods they employed.
In this editorial, we consider, instead, the politics of assessing "prestige and quality" in the context of persistent systemic inequities in the neoliberal university and global inequities in academic publishing (Chatterjee & Maira, 2014). What do the terms "quality" and "prestige" mean within the context of the U.S. academic industrial complex, a system where western-centric, positivist, universalist epistemologies continue to dominate knowledge production and dissemination? What forms of epistemic injustices accompany the construction of "quality" and "prestige"? How do we understand such terms within institutions of higher education where lack of diversity among

Ranking academics: Toward a Critical Politics of Academic Rankings

Critical Policy Studies, 2019

There is a need in academic rankings research for a more critical and political analysis beyond the register of normative global governance studies and the pervasive positivism of new public management that dominates the literature of social policy in the area of higher education and research. Given that academic rankings are powerful topological mechanisms of social transformation, critical theorists have a responsibility to engage with this extant research and to establish a politically sensitive agenda of relevant critical analysis. Thus, this article identifies three uncritical and pervasive assumptions that dominate academic rankings research, and which preclude a properly critical, and thus political, understanding of the ranking phenomenon. The powerful imbrication of these assumptions in rankings research will then be demonstrated by a review of the extant literature broken down into three broad categories of recent research (micro-methodology, sociocultural criticism, potentially critical). Building on points of departure in the third category that are promising for a critical agenda in future analyses of rankings, the piece concludes by suggesting three specific and undertreated aspects of academic rankings promising for future critical analysis. These aspects concern the roles of social apparatus, political arkhê, and historical dialectic.

Global science, national research, and the question of university rankings

Palgrave Communications

Science has always operated in a competitive environment, but the globalisation of knowledge and the rising popularity and use of global rankings have elevated this competition to a new level. The quality, performance and productivity of higher education and university-based research have become a national differentiator in the global knowledge economy. Global rankings essentially measure levels of wealth and investment in higher education, and they reflect the realisation that national pre-eminence is no longer sufficient. These developments also correspond with increased public scrutiny and calls for greater transparency, underpinned by a growing necessity to demonstrate value, impact and benefit. Despite ongoing criticism of methodologies, and scepticism about their overall role, rankings are informing and influencing policy-making, academic behaviour, stakeholder opinions, and our collective understanding of science. This article examines the interrelationship and tensions between the national and the global in the context of the influences between higher education and global university rankings. It starts with a discussion of the globalisation of knowledge and the rise of rankings. It then moves on to consider rankings in the context of wider discourse relating to quality and measuring scholarly activity, both within academia and by governments. The next section examines the relationship and tensions between research assessment and rankings, in policy and practice. It concludes by discussing the broader implications for higher education and university-based research.

Shaping the field: the role of academic journal editors in the construction of education as a field

British Journal of Sociology of Education, 26, 5, 2005

In a previous BJSE article (Nixon and Wellington, 2005) we examined current trends in book publishing and how these have influenced and will influence the construction of the field of educational studies. (The latter study was a follow-up to an earlier study reported in Nixon, 1999.) This article focuses on journals and their editors and, to a lesser extent, the role that the peer review process plays in shaping the field of educational studies. We use (critically rather than deferentially) notions drawn from the work of Bourdieu (1996) – the ‘field of power’, defining boundaries, systems of dispositions, right of entry, and the ‘illusio’ – to consider and conceptualise data from interviews with 12 journal editors. Our own position in writing this article is as academic practitioners involved in reading, peer reviewing and editing academic journals within the field of educational studies.

‘The Production of Scholarly Knowledge in the Global Market Arena: University Ranking Systems, Prestige and Power’

The relationship between disciplines and the institutions within which they are situated is a fertile area for researching the shaping of sociological knowledge. Applying theoretical insights from the sociology of knowledge, this paper draws on an empirical study of research publications in the sociology of health and medicine to show which institutions in the Australian context are most likely to use sociological theory. When the institutions are positioned within the global university ranking system, an inverse association between sociological theory and the relative wealth and prestige of the originating institution becomes evident. Some of the implications of this finding are discussed with reference to the ongoing viability of disciplines.

Why the nature of educational research should remain contested: A statement from the new editors of the British Educational Research Journal

British Educational Research Journal, 2018

We are delighted to have been invited to edit the journal by the British Educational Research Association (BERA). As we begin our custodianship, we want to engage with a set of questions that emerges from the state of contemporary education: What counts as educational research? What, for that matter, counts as education? Is there a 'gold standard' of educational research? Who is best placed or best qualified to carry out educational research? Who are the readers of our journal and how do they read it? How can a journal focusing on the four nations of Britain also serve an international academic community? The journal that we are inheriting from the previous editorial team is in excellent shape, which puts us in a strong position to address the transforming international context of educational research, policy and practice. Our vision is that BERJ retains its hallmark of excellence and continues to showcase the best British and international research in education. At the same time, we hope that the journal will set the agenda for exploring the boundaries of educational research, how educational research relates to policy and practice, and who educational research is produced by and for. The editors of an academic journal in any field right now must also ask an additional set of questions: What is a journal and what is its role in the present academic and professional climate? What are the implications for a journal, and the field it serves, of open access, changes in forms of electronic publishing, and interaction with social media? Who reads an academic journal and how do readers find and access a particular article? The days when anyone 'reads' a single paper issue of a journal from cover to cover are surely numbered. Do we even need academic journals any more? We acknowledge that the field and disciplines of educational research, and the publishing practices that have traditionally served them, are in question. 
This presents challenges and opportunities for the journal. We believe that BERJ will maintain its brand of academic excellence throughout these transforming circumstances by being self-reflexive, and by encouraging submissions that question the nature of research and of the disciplines that contribute to our understanding of education. Our hope is