The Impact of Research Assessment on the Profession and the Discipline of Political Science

At the 2016 SISP annual meeting in Milan, we held a roundtable on research assessment in comparative perspective. Leading European scholars, both experts in evaluation and with significant experience at the helm of their respective national associations, took part in this roundtable: Prof. Matthew Flinders, Chair of the Executive Committee of the Political Studies Association (PSA); Prof. Rudy Andeweg, Chair of the Executive Committee of the European Consortium for Political Research (ECPR) and former Chair of the Dutch political science association; Prof. Catherine Paradeise, Professor Emerita at UPEM-LISIS (Laboratoire Interdisciplinaire Sciences Innovations Sociétés) and expert in academic evaluation; and Prof. Daniele Checchi, member of the Group of Expert Evaluators in Economics (GEV-13) for the Research Quality Evaluation (VQR) 2004-2010. They were asked to share their experience with research assessment in their countries and to contribute their points of view to the discussion of its impact on the social sciences, and particularly on political science. As organizer of the roundtable, I asked them to comment on, and report their experience of, the following aspects of research assessment:

  1. Have research assessment exercises in your country been met with enthusiasm and collaboration or with suspicion and resistance? What were the arguments for and against?
  2. Which aspects have been pinpointed as being particularly problematic: the use of quantitative indicators (such as single product/journal impact factor); the pressure towards internationalization (often coinciding with ‘publishing in English’); debatable rankings of publishers’ prestige, etc.?
  3. What impact have these exercises had on the academic profession in political science? Have they prompted a higher rate of international submissions? Have they improved overall production rates? Have they encouraged publications of journal articles as opposed to monographs?
  4. Which aspects have been reformed/improved from one round to the next? Have the problems encountered in early rounds been corrected in successive rounds?

The context for such an analysis is the fact that several European countries – UK, Sweden, Spain, Norway, Netherlands, Italy, Ireland, Hungary, Germany, France, Finland and Belgium – now conduct periodic research assessment exercises. In particular, the Italian university system has already conducted three such rounds of assessment, following particularly formalized procedures. We therefore thought that the time had come for a collective reflection on the pros and cons of such exercises and on the potential repercussions they may have on the academic profession and the discipline of political science. The very idea of assessing scientific production was, after all, met with considerable resistance and skepticism in many countries, Italy included. Some are opposed in principle to the idea of assessing scholarly products as if they were just any other product, implicitly rejecting both the logic of accountability (how public or public/private funds are spent) and the logic of monitoring. Others have misgivings about the specific way in which this assessment is carried out, and in particular about the construction of increasingly indicator-driven excellence rankings of departments, scholars and disciplines. A handful doubt that such assessment has any consequence at all (although it does have a small financial impact on the distribution of funds from the Ministry of Education, which departments can use to expand their teaching staff), while many object to the mostly unintended consequences that these exercises have on the development of the academic profession. The controversy is particularly intense in the social sciences and humanities, and therefore also in political science: areas in which so-called bibliometric indicators are more difficult to apply, so that assessment must remain mostly qualitative.
The articles that follow are substantially elaborated and refined versions of the interventions presented at the roundtable. The article by Flinders sketches the long history of British research assessment and warns against the subtle and paradoxical effects of the potential excesses of a productivity-driven assessment of academic activity. The articles by Andeweg and Paradeise show how other European countries tailor research assessment to the specific needs and particularities of the national organization of academic and research institutions. Finally, the article by Checchi provides abundant data on the Italian experience with the VQR (Valutazione della Qualità della Ricerca), allowing readers to draw their own conclusions. In thanking the participants once again for their generous contribution of time, knowledge and ideas, I would like to draw attention to a few common themes that emerge from these articles:

  1. The managerial logic that inspired these assessment exercises, particularly in the UK but also in other European countries, was mostly imposed over the heads of the academic profession as a way of curtailing what were perceived to be outdated privileges; the academic profession, largely sidelined in the elaboration and implementation of these procedures, has responded with skepticism, resistance or indifference to the idea of assessing scholarly production; this is particularly striking in the case of political science, as political scientists have been marginalized in one of their putative fields of expertise – the politics of academic policy-making;
  2. Assessment of research products relies increasingly on quantitative indicators and rankings of journals and publishing houses – an aspect contested in France and Italy, among other countries – in a mimetic attempt to emulate the hard sciences; such indicators increasingly acquire a life of their own and are often used as summary measures of the scientific worth of departments and scholars;
  3. Research assessment exercises introduce a number of potentially distorting elements: a) peer-reviewed journal articles tend to be assessed better than edited volumes and monographs, regardless of their real value; b) joint works tend to be preferred over single-authored works, inducing an artificial inflation of multi-author products; c) all other things being equal, works in English attract greater readership and gain higher impact factors than works in national languages, which particularly affects academic communities that do not use English as their first language; d) research assessment rankings of departments and scholars induce ‘gaming strategies’ that create further distortions (strategic hiring, discouragement of teaching, creation of two-tier academic milieus) that do not necessarily secure better scholarship; e) the advantages of creating a culture of assessment, peer review and accountability may be more than offset by the costs, in terms of time and money, of the assessment exercise itself;
  4. The impact of research assessment on departmental funding is highly uneven across Europe – small but meaningful in Italy and France, nonexistent in the Netherlands (where cuts and increases in funding follow a different logic) and indirect in the UK (through the remarkable effect that rankings have on the attractiveness of departments for scholars and students) – while the impact on the nature and pressures of academic life is momentous (described in one of the contributions as ‘going MAD’); the relevance of political science for society may have paradoxically suffered from this attempt to make it more socially accountable, as the energy and attention of scholars have been in part diverted from the pursuit of interesting, cross-disciplinary research questions to the production of formally more polished and marketable works;
  5. Subsequent reforms of the exercise have, in certain cases, tried to correct some of the perceived distortions by adding teaching assessment, for example, or by adjusting the number and selection of works to be assessed; but these corrections run the risk of introducing new distortions of their own.

In conclusion, while research assessment throughout Europe addresses the issues of transparency, comparability and accountability in the academic world, it also carries challenges of its own that particularly affect the social sciences and the humanities. Italian political science is neither alone nor unique in experiencing some difficulties in having its production assessed through such methods, yet it would be difficult to argue that the assessment should cease and that the academic world should deprive itself of this instrument of self-evaluation and accountability towards society. The one overarching lesson that we may perhaps draw from this comparative analysis is that political scientists need to pay greater attention to academic politics and policy, and should attempt to play a more proactive role in defining the standards and goals of academia.
