The Tragedies of Political Science: The Politics of Research Assessment in the United Kingdom

The Tragedy of Political Science is the title of a 1984 book by David Ricci that made a bold argument concerning the evolution of the discipline. Ricci’s thesis, put simply, was that as political science became more ‘professionalised’ throughout the twentieth century it also became less relevant, more verbose, less engaged, more impenetrable, and increasingly distant from practitioners of politics and the public. The discipline had simply not lived up to the high hopes of C. Wright Mills for the social sciences – as set out in The Sociological Imagination (1959) – but had, if anything, become ensnared in a trap of its own making. In 1967 the Caucus for a New Political Science (CNPS) was created in the United States in order to encourage social engagement and activism amongst political scientists, in direct rejection of the American Political Science Association’s (APSA) commitment to political neutrality and refusal to engage in major social debates. The ‘tragedy’, as both David Ricci and the CNPS argued, was that at a historical point when American society desperately needed the evidence and insights that political science could deliver, the discipline apparently either did not want to engage or had little to say. Political science – to paraphrase C. Wright Mills – had failed to deliver on its early promise.

To offer this disciplinary narrative is to offer little that is novel. The flaying of political science has become a popular intellectual pastime in recent years and there are clear exceptions to this broad account in the form of individual scholars or sub-disciplines that have retained a clear social connection. Engaged scholars, however, arguably became very much the exception rather than the rule in a profession that incentivised sub-disciplinary balkanisation, methodological hyper-specialism, theoretical fetishism and the development of esoteric discourses. This is, of course, an account of American political science that came to a head with the emergence of the Perestroika Movement in 2000, with its demands for greater methodological pluralism (within the discipline) and greater social engagement (beyond the discipline). While political science beyond the United States was never quite so seduced by the promises of rat-choice, quantitative, large-n, mathematical methodologies as to fuel a ‘raucous rebellion’, the issue of whether and how the discipline should be required to demonstrate its non-academic relevance, social impact or public value has become a global challenge. This, in turn, has spawned a growing pool of scholarship on the structural and contextual factors underlying political science’s apparent ‘relevance gap’ and, from this, how to ‘make political science relevant’ (Capano and Verzichelli 2016; see also 2010).

The focus of this article is more specific, provocative and future-focused. It offers an analysis of the impact of arguably the most explicit and potentially far-reaching externally imposed research audit process in the world – the United Kingdom’s Research Excellence Framework (REF). As such, this article reviews the REF assessment process before exploring the impact and unintended consequences of this incredibly dominant and demanding ‘meta-governance’ framework for British higher education. It then locates the ‘politics of’ research assessment within a far broader and innovative account of the changing nature of scholarship in the United Kingdom. The core argument is that, more than thirty years after the publication of Ricci’s Tragedy of Political Science, it is now more appropriate to survey the tragedies of political science in the sense of an apparent failure by the discipline to adopt a strategically selective and politically astute approach to navigating a changing socio-political context. Political science, put simply, frequently appears very poor at utilising the insights of its own discipline. The tragedies of political science therefore expose the discipline’s own lack of political guile.

Could it be that this analysis makes the classic mistake of over-generalising from a single case study? What relevance does the REF have for scholars based beyond the shores of an island of less than a quarter of a million square kilometres off the coast of Northern Europe? The answer is simple. As already mentioned, concern regarding the social benefit of publicly funded scholarship has moved up the political agenda in recent years and shows no sign of abating (quite the opposite). Add to this the fact that the UK has for some decades been a world leader in imposing market-based managerialist reforms in higher education that often have subsequent ripple-effects beyond its shores, and the relevance of this article for debates about scholarly relevance becomes clear. REF-like – or what might more accurately be described as REF-lite – procedures have been or will be implemented in a large number of countries during the next decade, and to be forewarned is to be forearmed. It is for this reason that this article is divided into three main parts.

Part I provides a descriptive account of the introduction and evolution of research assessment frameworks in the United Kingdom. Part II then dissects this chronological account to expose the unintended consequences and intellectual pathologies of this process. The argument is not that research assessment frameworks are a ‘bad thing’ or that they do not deliver positive impacts, but it is to emphasise the manner in which new frameworks have to be very carefully calibrated in order to avoid negative over-steers and short-term gaming of the system. The final section – Part III – argues that any understanding of the impact of research assessment frameworks has to be located within an account of the wider context of higher education and the rapidly shifting sands of scholarship. The main concluding argument is therefore not about research assessment per se but about how research assessment is contributing to the break-up of academe and the splintering of disciplines along diverging pathways. The tragedies of political science are therefore converging to create the unravelling or unbundling of both the discipline and the nature of scholarship. It is this innovative macro-political argument that forms the main contribution of this article.

1. The Meta-Governance of Research

Although a far simpler and more accurate title for this section might be ‘Tragedy and Farce’, the main argument is simply that it is impossible to separate the introduction of externally imposed assessments of research by governments (either directly or indirectly) from broader debates concerning the distribution of political power in a polity. Therefore, although APSA may well have been at fault for failing to position political science more strategically, or to ensure that evidence of clear social benefit and relevance for the discipline was always conveniently at hand, it is also true that large sections of the right-wing political elite in the United States treated universities, in general, and political science departments, in particular, with a mixture of disdain and distrust. The attempts between 2009 and 2014 to block or restrict federal funding to political science in the United States may therefore have been couched in terms of economic prioritisation in times of austerity, but they actually veiled deeper ideological views about the role and independence of scholars and universities. In many ways the decision by the Conservative Government to introduce a new Research Selectivity Exercise in 1986 was a similarly charged exercise in pressure politics. Mrs Thatcher had become Prime Minister in 1979 on a platform of public sector control, discipline and cutbacks. The belief in the capacity of the market over the state led to what David Marquand (2014) termed ‘the decline of the public’ in the sense of an assault on those basic anchor institutions within society that were designed and intended to promote collective values over individualised notions of society. The politics of ‘the public’ was rapidly eviscerated by a new politics of ‘the private’ that not only limited the powers of collective-bargaining institutions such as the trade unions but also imposed neo-liberal values across the public sector in the guise of ‘New Public Management’ (NPM).

NPM was a broad set of managerial methods that all shared a simple faith in the capacity of the market to drive up performance, increase efficiency and expose shirking. The promise was ‘a government that worked better and cost less’ but – as the prize-winning scholarship of Christopher Hood and Ruth Dixon (2015) has revealed – the reality rarely lived up to the rhetoric. A neo-liberal philosophy of management that was supposed to banish bureaucracy and increase dynamism actually gave birth to new forms of ever more elaborate and organisationally suffocating rules, regulations and red tape. The universities were, however, more of a challenge for the Conservative governments of the 1980s due to their historically embedded independence from direct government control. But Mrs Thatcher was very much a conviction politician. Her time at the University of Oxford had convinced her that universities were complacent institutions that were overly protected from market forces. The public deserved better – in terms of both performance and accountability – and it was her job as Prime Minister to find a way to make that happen. Constrained by intra-party tensions during her first government (1979-1983), her capacity to intervene was limited to introducing fees for overseas students in 1981. By her second term in office (1983-1987) Mrs Thatcher had a far stronger grip on the Conservative Party and could therefore use this stable foundation to institute more radical measures.

Lacking direct control capacity, the obvious lever for effecting change lay in public funding. In short, if Mrs Thatcher (or, more precisely, her government) could not easily impose reforms on the governance of universities, she could impose requirements and controls upon the distribution and use of public money. The shift was therefore one of meta-governance (i.e. ‘the governance of governance’ or the ‘rules of the game’). The University Grants Committee (UGC) had existed since 1918 with a remit to act as a buffer between higher education and the government of the day. Its main role had been to distribute block Treasury grants to universities (with the remainder of university funding coming from tuition fees paid in full on every student’s behalf by his or her local authority). The UGC was, Thatcher believed, a committee made up of academics distributing large amounts of public money to academics, and although the body did oversee the implementation of the first ‘Research Selectivity Exercise’ in 1986 it was abolished in 1989, with its powers transferring to a new Universities Funding Council (UFC) on which academics were a minority.

The shadow of central government control had therefore become far tighter and the UFC oversaw the introduction of a classic NPM framework involving contracts, performance monitoring and league tables. With the benefit of hindsight it is possible to suggest that one historical tragedy was the failure of British universities to divert, subvert or shape the Thatcherite revolution; their general position of obstinacy, combined with predictions of impending crisis that ultimately proved woefully inaccurate, simply reinforced the Conservative governments’ belief that university leaders were out of touch and radical reform was necessary. And while possibly not immediately interpreted as a radical act, the introduction of the first Research Selectivity Exercise (RSE) in 1986 can now be viewed as a crucial initial wedge or crack in the governance of universities that has subsequently developed into a core feature of academic life in the UK. The degree to which this initial ‘crack’ or ‘wedge’ has been expanded is clear from the manner in which the RSE involved the UGC creating subject-specific sub-committees that would review just five research outputs – books, articles, papers, etc. – from the past five years on which the department in question would be ‘content for its total output to be assessed’. In addition to these outputs, departments were invited to submit up to four pages of general description about their research strengths. This really was ‘light touch’, to the extent that one subject committee was so confident that it already knew all it needed to know about each university’s departmental quality that it produced a provisional classification before it received any submissions, and ‘when it got all the extra evidence it saw no reason at all to alter any of the classifications’.1 As a point of comparison, the most recent assessment (REF2014, discussed below) required four outputs for every member of staff returned, plus extensive datasets on a whole range of topics, plus a range of environment documents, plus a number of independently verifiable ‘impact case studies’.

It is therefore possible to identify a rather rapid process of ‘regulatory creep’ with all the usual unintended bureaucratic and organisational implications. The existence of an academic ‘expert body’ (i.e. the UGC) controlling a process that was implicitly designed to shed light on the previously murky world of academic funding allocation did not go unnoticed. This was exactly the sort of insider-elite horse-trading and pork-barrelling that Mrs Thatcher was so personally committed to abolishing across the public sector. By 1989 the UGC had been abolished and replaced by a ‘non-expert’ UFC, and by the 1989 exercise – now labelled the Research Assessment Exercise (RAE) – universities were permitted to return two pieces of work for each member of staff and information was also sought on the total volume of a department’s research outputs. Other changes included a shift from the original 37 subject-specific sub-committees to 152 subject units assessed by nearly 70 panels, which would, in turn, apply a new five-point scale to assess the quality of research. (Interestingly, a recommendation made by the chief executive of the UFC that the non-academic impacts achieved by each department be evaluated – and therefore incentivised – was rejected.)

The decision to end the binary division between research-focused universities and teaching-focused polytechnics in 1992 created new challenges for the assessment of research. Some ‘new’ universities were clearly committed to developing research-related reputations in some fields, but the overall pot of research funding was not going to be increased. The RAE therefore had to become far more robust and rigorous which, in effect, meant the rapid creation of a body of administrative law around higher education research assessment. Some of the decisions produced by the 1992 RAE were challenged in the courts and, although the UFC successfully defended its decisions, the view of the judiciary was clear: academics could no longer make decisions about what was ‘good’ or ‘bad’ research based simply upon their claimed professional expertise and subjective judgements.

The response was a doomed attempt to replace normative judgement with administrative and technical precision. Rules, regulations and methodologies were codified, and procedures and processes formalised, but all this achieved was an ever-greater bureaucratic burden on universities, research managers and academics. Peer review by specialist disciplinary panels remained the core method for assessing research quality. Moreover, as the results of the RAE developed important (indirect) league-table implications for the recruitment of students, particularly international students, it became clear that a football-like transfer market was emerging within higher education. Inflated salaries could be demanded by a small number of research-intensive scholars who would, in turn, secure research-only positions that effectively ensured their ‘elite’ status, insulated them from pressures (such as teaching or administration) that might threaten their position and thereby reinforced their high market value. By the 2001 RAE new rules were therefore being implemented about universities ‘poaching’ staff towards the end of an assessment period in order to claim outputs that had in reality been researched and written at a different institution. By 2008 the situation had become even more complex with an attempt to disaggregate departmental performance. Prior to this date all departments received a simple assessment grade – with 1* being the lowest and 5* the highest – but there was widespread game-playing in the sense that the overall grade could hide a multitude of weaknesses within a department. Many departments would have ‘a long tail’ in the sense of a fairly large number of staff who were simply not research active or were undertaking work that was deemed of insufficient quality. In 5* departments these ‘long tail’ staff would effectively be over-graded and over-funded because the department received (and was funded based upon) a single flat score (i.e. 5*). Conversely, in a largely teaching-focused department that did have a small number of excellent research-active staff, these academics would be unfairly penalised (and under-funded) by being captured within the overall grade of a weak department.
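
To make the funding arithmetic of this ‘long tail’ problem concrete, the toy sketch below contrasts a flat departmental grade with the per-person ‘quality profile’ approach described in the next paragraph. The weights, grades and staff numbers are invented purely for illustration; they are not the real RAE/HEFCE allocation formulae.

```python
# Toy model of flat-grade versus profile-based funding.
# All weights and numbers below are hypothetical illustrations.

FUNDING_WEIGHT = {5: 4.0, 4: 2.5, 3: 1.0, 2: 0.0, 1: 0.0}  # invented per-grade weights

def flat_funding(dept_grade, n_staff):
    """Flat-grade model: every returned member of staff attracts the
    department's single grade weight, however uneven the work really is."""
    return FUNDING_WEIGHT[dept_grade] * n_staff

def profile_funding(profile):
    """Profile model: funding follows each member of staff's own level.
    `profile` maps grade -> number of staff working at that level."""
    return sum(FUNDING_WEIGHT[grade] * n for grade, n in profile.items())

# A top-graded department of 20 staff in which 8 form a weak 'long tail':
print(flat_funding(5, 20))             # 80.0 -> the tail is over-funded
print(profile_funding({5: 12, 2: 8}))  # 48.0 -> funding follows actual quality

# The converse: three excellent researchers inside a weak department.
print(flat_funding(2, 20))             # 0.0  -> their excellence is invisible
print(profile_funding({5: 3, 2: 17}))  # 12.0 -> small pockets now rewarded
```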

The answer was to adopt a more refined process based upon ‘quality profiles’ that reflected the performance of all staff and made more refined calculations on the basis of excellent research performance even if it was found in relatively small pockets. The aim was to encourage dynamism and to penalise those departments that did in effect carry ‘a long tail’. The problem was that as the research assessment process became more ‘robust’, ‘fine-grained’ and ‘professional’ it also became more demanding upon academics and universities in terms of both administrative costs and emotional distraction. The benefit of this historical ‘long view’ is that it provides an almost perfect representation of Parkinson’s Law of Bureaucracy – that every reform intended to reduce bureaucracy and increase organisational performance will inevitably have the opposite effect. The RAE had become ‘the tail that wags the dog’ (a typically befuddling English phrase that simply means that a secondary or subservient object, process or operation is in fact dominating an issue). A major review was initiated under the chairmanship of Professor Sir Gareth Roberts in 2003 and led to the recommendation that teaching-focused universities be given the opportunity to opt out of the RAE in return for a guaranteed base level of funding. This was rejected by the institutions it was intended to help due to a concern that taking such an ‘opt out’ would send out the wrong message to potential students and research funders. Put slightly differently, the Roberts ‘opt out’ was interpreted as reflecting a lack of ambition by any university that took it, and in an increasingly aggressive and market-driven environment this could be a suicidal strategy.

Even the efforts of the Treasury failed to trim the bureaucratic costs of research assessment, and in 2008 the government estimated that simply participating in the exercise was costing English universities (note, not Northern Irish, Welsh or Scottish ones) nearly 50 million pounds of public funding that could otherwise have been dedicated to the primary tasks of the institutions (i.e. research and teaching). The Treasury did, however, isolate a new option in the form of metrics that could in theory reduce the bureaucratic burdens on universities. This would involve the adoption of a set of metrics such as citation statistics, journal impact factor scores and other quantitative measures as proxies for research quality, thereby removing the need for time-consuming and elaborate procedures of peer review that were in themselves highly normative. The constant analysis of specific metrics could even remove the need for five-yearly research assessment cycles and provide more accurate and up-to-date information on which funding decisions could be made. In 2006 the then Chancellor of the Exchequer, Gordon Brown, made a surprise announcement that the next RAE was to be a metrics-only exercise and that it would be for the Higher Education Funding Council for England (HEFCE) to decide the specific details of this process. Unfortunately the Chancellor had not forewarned HEFCE (which had itself been created in 1992 to assume the functions of the UFC) of this announcement, and a period of intense confusion reigned until a compromise was agreed: although it was too late to change the criteria for the 2008 RAE, the 2014 exercise – now retitled the ‘Research Excellence Framework’ (REF) in an attempt to escape some of the negativity that had built up around the RAE – would for the first time include a new ‘impact’ component, in return for the Treasury dropping its proposals for metrics.

Table 1. The Evolution of Research Assessment Exercises in the UK, 1986-2014.


The meta-governance of research funding within higher education has therefore been transformed since the mid-1980s from an essentially internal, informal and elitist system of financial distribution (i.e. the UGC) to the external imposition of an incredibly extensive audit and assessment framework with huge associated costs. The evolution of this framework was criticised by the unions as little more than the advancement of marketisation into the university sector, but it was reluctantly accepted by university leaders who seemed almost unable to frame a coherent response or to influence the agenda in a manner that might have smoothed some of the rough edges of the process. However, before examining some of these costs and ‘the politics of’ this process in more detail, and therefore how the nature of academic life and scholarship has changed and is changing in the UK, it is worthwhile briefly charting the REF2014 framework and results. With this in mind, possibly the most significant element of REF2014 was the introduction of a significant assessment component for the social impact of scholarship (see Figure 1, below). Each department or unit would now have to deliver a number of ‘impact case studies’ that could be independently verified and that clearly demonstrated the link between research outputs and some significant element of non-academic impact. This was significant for a number of reasons, not least the simple fact that ‘impact’ was a new and potentially game-changing part of the assessment process. Most institutions had become adept at managing the publication profiles of their staff, at managing the existence of ‘long tails’ and at making claims regarding the existence of a dynamic, collegial and stimulating research environment. Demonstrating non-academic impact was a new piece of the assessment framework that would dwarf the marginal gains delivered by tinkering with publications management and could therefore transform the results and subsequent league tables. How exactly the introduction of this major new component could be reconciled with REF2014’s stated objective of ‘[reducing] significantly the administrative burden on institutions in comparison to the RAE’ was unknown but would (perhaps not surprisingly) surface as a major issue in the wake of the process.

Figure 1. The REF2014 Assessment Structure.


In order to fulfil the assessment framework each ‘unit of assessment’ (i.e. a department or part of an academic school that wished to be assessed in a certain discipline) was expected to make a formal submission consisting of five main elements. Part 1 related to information and data regarding the number of staff being submitted (proportions, exemptions, etc.); Part 2 detailed the publications (up to four) being submitted for assessment by each member of staff; Part 3 required the submission to describe the unit’s approach to enabling non-academic impact from its research, with case studies providing specific examples; Part 4 harvested a range of data about the broader research environment of an institution, such as the number of research degrees awarded, research income, etc.; and Part 5 featured a narrative statement prepared by each unit about the research culture, environment and momentum that was in place (plus strategies for future development). As Table 1 illustrates, the scale of the exercise was extensive, with 154 universities making nearly 2,000 submissions involving over 50,000 staff. The results were interesting in general terms and very positive for Politics and International Studies as a discrete discipline: more than sixty-eight per cent of the overall research quality of the discipline was assessed as 4* or 3* (i.e. either ‘world-leading’ or ‘internationally excellent’). When the various percentages are combined to produce a grade-point average (GPA) – a simple measure of the overall or average quality of research, which takes no account of the number or proportion of staff submitted – the overall score for the discipline of 2.90 reflects a marked increase on the comparable score of 2.34 in the 2008 RAE. Moreover, nearly all politics departments witnessed substantial improvements on their 2008 scores, with those at Leeds, Strathclyde, Southampton, Westminster and York enjoying the biggest increases. And, on the basis of the discipline’s GPA score of 3.22 in the specific area of ‘impact’, political scientists demonstrated that their research has real-world meaning and relevance.
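
For readers unfamiliar with the calculation, a minimal sketch of the GPA arithmetic is given below. The quality-profile percentages are invented for illustration; they are not the REF2014 results of any actual unit.

```python
# Minimal sketch of a REF-style grade-point average (GPA).
# Note that the GPA deliberately ignores how many staff were submitted.

def gpa(profile):
    """Weighted average of star levels, where `profile` maps a star level
    (0 = unclassified) to the percentage of the submission at that level."""
    assert abs(sum(profile.values()) - 100) < 1e-9, "profile must sum to 100"
    return sum(star * pct for star, pct in profile.items()) / 100

# Example: a unit judged 30% 4*, 45% 3*, 20% 2*, 5% 1*, 0% unclassified.
print(gpa({4: 30, 3: 45, 2: 20, 1: 5, 0: 0}))  # 3.0
```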

Table 2. REF2014 Politics and International Studies: Top Ten Institutions.


Not surprisingly, however, a number of alternative formulae have been developed in order to tease out the deeper insights of the basic REF2014 data. ‘Research power’, for example, relates to issues of scale and provides a measure of the volume of research multiplied by its quality. The effect, as shown in the first column of Table 3, is to reward the largest departments, with King’s College London jumping to the top of the rankings thanks to the 98 researchers submitted to the Politics and International Relations sub-panel. ‘Research intensity’ takes into account the proportion of full-time staff that were returned to the REF2014 process by a department and therefore attempts to correct for strategic submissions in which a significant number of staff are left out of the audit process.
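
A hedged sketch of these two alternative measures is given below. ‘Research power’ is commonly computed as GPA multiplied by the full-time-equivalent (FTE) staff submitted, and one common ‘intensity’ correction scales the GPA by the share of eligible staff actually returned; the exact published formulae vary, so both functions (and all the numbers) should be read as illustrative assumptions rather than the official methodology.

```python
# Illustrative versions of 'research power' and 'research intensity'.

def research_power(gpa, fte_submitted):
    # Rewards scale: a large, good department can outrank a small, excellent one.
    return gpa * fte_submitted

def intensity_weighted_gpa(gpa, fte_submitted, fte_eligible):
    # Penalises selective submissions that leave eligible staff out.
    return gpa * (fte_submitted / fte_eligible)

print(research_power(3.0, 98))              # 294.0 -- big, inclusive unit wins on power
print(research_power(3.4, 20))              # 68.0  -- small, selective unit loses on power
print(intensity_weighted_gpa(3.4, 20, 40))  # 1.7   -- selectivity exposed by intensity
```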

Table 3. REF2014 ‘Research Power’ and ‘Research Intensity’: Top Ten Institutions.


Debates concerning the most appropriate or credible way of understanding and presenting research assessment data have almost spawned their own sub-field of political science. The simple facts are that: (1) no research assessment process is perfect; (2) different formulae will inevitably produce different results; and (3) institutions will cherry-pick those interpretations of performance that benefit them the most. But the problem is that the debates go far beyond the analysis and presentation of the results and down into procedural issues: ‘measurement controversy’ over how specific outputs are graded by a REF sub-panel (procedures for the ‘double-weighting’ of books, for example); ‘patronage controversy’ over who was appointed to serve on or chair the sub-panels; and ‘sampling controversy’ over who was selected by departments to form part of the submission, and on what basis. Some departments were inclusive and returned almost 100 per cent of staff on the basis of a mixture of collegiality and confidence. In some cases, ‘universal returns’ were the product of a failure of senior staff to make tough decisions and ‘blaming the REF’ became a useful lightning-rod for long-standing institutional weaknesses. In other departments a rather centralised and uniform decision-making system was imposed whereby anyone with outputs deemed to fall below the 3* threshold was simply not returned. Under ‘research intensity’ those departments that were more selective fell back down the rankings, but the long and the short of it is that, due to institutional selectivity, the units of assessment were not being assessed on a like-for-like basis, as that would have required all units to return all eligible staff.

What then does this largely descriptive account of research assessment in the UK tell us about the tragedy of political science? The main answer to this question must be that there has been no single ‘tragedy’ and it might therefore be more appropriate to explore the existence of an inter-woven set of tragedies.

  • Tragedy 1: The inability of British university leaders to influence, shape, moderate or control the evolution of increasingly bureaucratic research assessment processes since 1986 [T1].
  • Tragedy 2: The manner in which playing the research assessment ‘game’ has arguably become more important than promoting the vibrancy of scholarship itself [T2].
  • Tragedy 3: The failure of political science to utilize the insights of the discipline in order to challenge the imposition of an assessment model that was infused with neo-liberal values [T3].
  • Tragedy 4: The manner in which political science has gone (and is going) MAD [T4].
  • Tragedy 5: Disappointment in the sense that political science has not developed a ‘new politics of political science’ in order to turn challenges and problems into positive opportunities for the discipline [T5].

It is neither possible nor necessary to examine each of these tragedies in turn, apart from noting the manner in which some relate to the broader qualities of higher education in the UK and are not discipline-specific (T1, T2) whereas others are more disciplinary focused (T3, T4, T5). At the broadest level there is little doubt that the professional representation of higher education to the government (i.e. its ability to speak to power with one clear and loud voice) has been and remains hampered by the existence of a number of university groupings that, in effect, attempt to protect the interests of their members rather than of the university sector as a whole. This was to some extent acknowledged with the creation of the Council for the Defence of British Universities in 2012, but the pressure politics landscape for higher education remains fragmented and therefore diluted. The second tragedy is hard to substantiate in solid, data-driven form but, as C. Wright Mills argued, ‘Scholarship is a choice of how to live as well as a choice of career; whether he knows it or not, the intellectual workman forms his own self as he works toward the perfection of his craft… you must learn to use your life experience in your intellectual work’. With this sentiment echoing in my mind I can say with some confidence that a second tragedy, particularly for political science, is the manner in which it has allowed itself to become – whether it admits it or not – a REF-driven discipline. It is far from unique in this regard but there is something slightly more troubling about a discipline that was born with a commitment to engaged scholarship and contributing to the broader health of democracy being so easily and compliantly trapped within an external assessment process. It might be thought that those full-time students of politics who stake their claims to professional respect and credibility on having a sophisticated grasp of both politics ‘in theory’ and politics ‘in practice’ might have been slightly better equipped to shape, respond to and in some cases reject the external pressures that have been brought to bear. To some extent that professional collective spirit was undermined by the introduction of a brand of managerialism in which academics and academic institutions were implicitly incentivised to compete and not to share best practice, research capacities, impact networks, etc. That is not to say that the shared public ethos of British universities has been wiped away, but it is to make a strong argument that it has been eroded and replaced by an ever more aggressive form of ‘gaming in targetworld’.

Phrased slightly differently, it could be argued that one of the tragedies of political science is how it has succumbed to a form of professional MADness. MAD in this sense is an acronym for the phenomenon known as ‘multiple accountabilities disorder’, which hollows out and undermines institutions or disciplines by ensuring that their time is spent accounting to an ever-growing number of political, professional, regulatory and bureaucratic organisations to the detriment of being able to focus on their core and primary tasks. Failure, frustration and disillusionment are therefore almost the guaranteed symptoms of going MAD. The final tragedy is therefore one that focuses on the adaptive capacity of political science in the sense of developing a ‘new politics of political science’ that is vibrant and sophisticated and recognises both the opportunities and challenges for the discipline presented by the changing contextual landscape. This ‘new politics of political science’ will be discussed in more detail in Part III (below), but the next section explores some of the unintended consequences of the research assessment framework in the UK.

2. Unintended Consequences

The aim of this article is not to offer a polemical critique of the research assessment process in the UK. There is no doubt that the evolution from RSE to RAE and most recently to REF has delivered some positive outcomes in terms of acting as a driver of research quality, delivering greater public accountability and providing the opportunity to lever new funding resources through partnerships. Within organisations research assessment has also led to the recalibration of resources in order to maximise the value of funding in an increasingly constrained financial environment. Whether this process is viewed as ‘fine tuning’ or crude short-term intellectual engineering is a matter of intense debate, but there seems little doubt that there has been an overall upturn in the quality of the UK research base (e.g. articles in the top 1% of citations up from 11% in 1996 to 16% in 2012). And yet to assume an obvious causal link between the introduction of external research assessment processes and these performance-based statistical indicators arguably reflects the nature of the problem – the adoption of an incredibly narrow, technical and arguably self-defeating view of scholarship. This is, if anything, the deeper tragedy that risks polluting each and every discipline due to the dampening effect that the assessment process can have on what C. Wright Mills would call ‘the sociological imagination’ – that intellectual spirit of curiosity and freedom, the ability to trespass across inter-disciplinary and professional boundaries, a belief in the innate value of knowledge and learning without needing to rationalise each and every module against the demands of the economy. When stripped down to a core and basic conclusion, the main unintended consequence of the research assessment process has arguably been the imposition (and academic acceptance) of a brand of academic managerialism that is almost designed to squeeze out intellectual innovation, creativity and flair in favour of a ‘tick box’, ‘REF-return-first’ mentality.

This is, of course, my own highly personalised view of the impact of research assessment processes in the UK. It could be completely wrong, but I would argue that there is sufficient evidence to underpin my position. Indeed, it would be possible to make an even stronger argument and suggest that ‘the politics of the RAE, REF (or whatever it will be called in the future)’ has never been sufficiently exposed in ways that combine to facilitate a fundamental challenge to the process itself. That is not to say that some form of research assessment is illegitimate in light of the public funds committed to university research, or that such processes cannot have positive outputs and outcomes. It is, however, to suggest that the experience of the UK provides a salutary tale of a process of bureaucratic creep, accretion and sedimentation to the extent that its impact upon research and universities risks becoming dysfunctional – ‘MAD’. The aim of this section is therefore to shed light on the ‘hidden politics’ of research assessment in the UK, but in many ways this is just the precursor to a far larger argument about the changing nature of academic life that is made in the next and final section. What then does the ‘hidden politics’ of research assessment look like? What are its main components? Table 4 provides some answers to these questions and the remainder of this section looks at each of them in turn.

Table 4. Unintended Consequences: The Politics of Research Assessment in the UK.


The main point to understand from Table 4 is that none of the themes are isolated issues; they are interwoven into the fabric of the research assessment process and to some extent they are the natural consequences of the imposition of a crude bureaucratic structure upon higher education. Take, for example, Theme 1, ‘bureaucracy’: the original architects of the Research Selectivity Exercise had no intention of creating a system of assessment that would by 2014 cost universities around a quarter of a billion pounds (£246m to be precise) to administer. That the 2008 RAE imposed an administrative burden of around £66m on universities provides some sense of the manner in which a reform that was intended to increase organisational efficiency and effectiveness has actually spawned a bureaucratic leviathan. And yet to some extent the research assessment process is no longer simply about ‘research’; the league tables and rankings that are generated from the assessment process have become proxies of overall institutional standing that, in turn, are critical to the recruitment of international students and of the very best academic staff. This is a critical point: the politics of research assessment has expanded far beyond research itself.

This flows into our second theme and the notion of shadows (T2, Table 4). To some extent Mrs Thatcher’s initial foray into increasing central government’s grip on British universities was a classic example of the manner in which governance really does take place in the shadow of hierarchy. But the shadows of the research assessment process are particularly long and distinctive in the sense that not only do departments become almost ‘REF-driven’, to the extent that all procedures and processes are designed to (implicitly or explicitly) feed into a carefully managed REF planning process, but the rules, expectations and standards of the assessment process are to some extent imbibed by those institutions. Recruitment panels do not appoint the ‘best’ candidate but the ‘safest’ candidate when assessed through a REF lens; decisions about the use of new resources or funding are rarely taken on the basis of pure unadulterated intellectual ambition but on the basis of providing an evidence base for claims that were either made in the environment statement (i.e. the ‘REF5’ document within submissions) of the previous assessment or might be made in the next. The sphere of scholarly thinking has, I would argue, narrowed as a direct result of the research assessment processes that have cast an ever greater and more direct shadow over the nature of higher education in the UK. There is, so it is said, a silver lining to every cloud, but shadows, I am told, are completely dark, and in relation to research assessment there is a dark side that has received incredibly little open discussion – the impact of rejection (T3, Table 4).

What happens if your research is judged to be of insufficient quality to form part of an assessment return? The formal position has always been that RAE/REF processes are completely separate from institutional promotion systems, but the reality is far more complex. Rejection (i.e. Theme 3, Table 4, above) can have significant career implications. As Tables 1, 2 and 3 each in their own way demonstrate, different universities and departments have come to very different conclusions about the quality-quantity trade-off that any exercise like the RAE/REF inevitably brings with it. The rational actor model would incentivise a unit ‘going tight’ and putting in the smallest number of staff with the highest perceived quality rating (i.e. focusing down on the narrow GPA score and ranking); however, an equally rational actor might consider that the short-term gains of ‘going tight’ did not outweigh the long-term gains of ‘going broad’ in terms of potential ‘research power’ and ‘research intensity’. But there is another reason for being inclusive in research assessment planning: ‘cutting off a tail’ by rejecting members of staff from a submission is potentially an incredibly divisive decision. Moreover, those staff who do not ‘make the cut’ (usually at the 3* border) are inevitably likely to face potentially unfair knock-on consequences from this decision. ‘If Professor X was not returned at the last REF why should we want to appoint them?’ ‘If Dr. Y’s research was not viewed as being REF’able then on what basis should they really be considered for promotion?’ There are, of course, lots of reasons why a scholar’s relationship with a fairly arbitrary five-year research assessment process should not prevent them either being promoted or moving institutions but – just as has occurred at the wider institutional level – it is possible to suggest that an individual’s REF status has assumed a far broader significance as a proxy of overall scholarly status.

The problem with this development is that whether the Research Excellence Framework actually identifies and rewards ‘excellence’ (i.e. T4, above) is a moot point (but one that rarely finds expression in open academic debates). The research assessment frameworks in the UK prioritise, and therefore incentivise, a very specific definition of research excellence: generally a narrowly scientific idiom encased in verbosity and jargon that speaks to a tiny scholarly audience. To publish in the types of scholarly outlets that are likely to be highly prized in the assessment process is to narrow one’s focus to a level of hyper-specialisation or methodological masturbation. Peer review is taken as a sign of quality despite the well-known risk-averse, conservative predilections of such processes, and in this context single-author books become (ironically) almost risky, especially if claims for double-weighting are rejected. Contributions to edited collections are the intellectual equivalent of persona non grata, as generally are articles in special issues of journals (due to concerns about the rigour of review processes around commissioned articles on a specific theme). It is therefore with a mixture of great sadness and regret that I cannot help but agree with the argument made by Michael Billig in his book Learn to Write Badly: How to Succeed in the Social Sciences (2013). As Chair of the Political Studies Association of the UK I received several informal complaints and requests for advice in the wake of REF2014 from academics whose research had been assessed as achieving the 3* standard that was widely used as an institutional boundary for submission but who had still been left out of their university’s submission to the Politics and International Studies Panel due to a perception amongst senior staff that it did not quite ‘fit’ the profile the institution was trying to offer. The research might not have been published in the ‘right’ journal or might actually have adopted an unorthodox position in relation to major themes and issues. In some cases universities made strategic decisions to submit eligible political scientists to cognate assessment panels – such as Area Studies or Business and Management Studies – due to a belief that these were ‘softer’ sub-fields in terms of assessment.

The point being made is simple: although research assessment processes have undoubtedly incentivised a strong focus on research and publication within higher education, the definition of ‘excellence’ is arguably fairly narrow. It deifies a specific type of scholarship to the detriment of other equally valid forms of research (a point discussed in some detail in Part III, below). The impact of this – to come to our fifth theme (T5, Table 4, above) – is that scholars who do not, or refuse to, work within this fairly narrow idiom of highly technocratic, impenetrable scholarship are put at a significant disadvantage. And yet what was unique about REF2014 was the inclusion of an explicit component of assessment based upon the non-academic value, social impact or public value of research. In many ways the introduction of an impact component, as demonstrated through the submission of ‘impact case studies’, was an attempt to re-orientate research back towards having some applied, engaged or real-world relevance. The challenge, however, is that the dominant notion of impact was derived from the STEM subjects (i.e. science, technology, engineering and mathematics) and embraced a rather simple, linear process of knowledge production through to knowledge application that can be traced and demonstrated through the creation of new products, patents or medications. Only very rarely do the social sciences have such direct and clear impacts on society, and yet the research assessment process is almost forcing scholars either to gravitate towards research topics where demonstrable impact might arise or to play a more dubious game of making rather doubtful claims about the links between a specific research project and some socio-political change or legislative amendment.

Once more, the argument is not that the requirement to demonstrate the non-academic impact of publicly funded research is a ‘bad’ thing – many of the impact case studies submitted to the Politics and International Studies Panel offered convincing narratives of positive social engagement. Nevertheless there is a need to be aware of the risks of politicising political science by over-incentivising user-engagement around a fairly narrow definition of impact which is itself linked to a fairly narrow definition of research ‘excellence’. The creation of perverse incentives is actually likely to stimulate a set of strategic responses that political science, notably within the fields of public administration, governance and public policy, has spent several decades studying and warning against. Extensive ‘gaming’ of the research assessment framework is therefore the sixth (T6, Table 4, above) unintended consequence and takes a number of forms: from ‘buying in’ research grants and publications through high-level appointments, to closing down departments or units whose staff are viewed as never going to be able to play the ‘REF game’, to using rolling temporary contracts to reduce the number of formally ‘eligible’ staff. Other elements of gaming include basing returns not on assessments of research quality but on assessments of how many viable impact case studies a unit has, and working backwards from that to assess the optimum number of staff to be submitted (a classic example of ‘a tail wagging the dog’). The selective submission of only a small proportion of staff is one of the most common gaming strategies, as is hiring a number of overseas research ‘superstars’ on fractional contracts so that they can be returned within the hiring institution’s submission. This is generally a very positive development for the overseas scholar, who is effectively ‘double-dipping’ in terms of the utilisation of their research, but it is bad news for early career researchers who cannot get tenure or even a first step on the professional career ladder. As the next and final section will highlight, one major element of this gaming is that academics whose research is deemed to be only ‘recognised internationally’ (i.e. 2* or less) may be pressurised into accepting teaching-track positions in order to make them ineligible for external assessment processes.

These pressures are particularly problematic for scholars who work at the nexus or intersection of different disciplines. The research assessments in the UK have always adopted traditional disciplinary silos as their main tool for sifting and assessing research and this is a major problem for inter-disciplinary or simply less orthodox scholars who wish to range across intellectual landscapes. This is a particular puzzle given the emphasis of the UK funding councils in recent years on encouraging inter-disciplinary research, because those scholars who actually respond to the signals, take risks and refuse to be intellectually pigeon-holed then find themselves defined as ‘high risk’ in assessment terms. The politics of research assessment therefore contains a whole set of embedded inequalities that almost prevent an open, dynamic and inclusive approach to intellectual diversity – at exactly a point in history when such approaches are badly needed. Moreover, these inequalities are not just disciplinary. The research assessment process arguably maintains a set of gender-based and ethnicity-orientated inequalities that have not yet been the topic of sustained analysis or discussion. Long-standing concerns about political science in the UK in terms of social representation and diversity were to some extent replicated within REF2014 returns. Women were less likely to be returned than men, as were scholars from black or other ethnic minority backgrounds. Male professors submitted more monographs, female professors more co-authored articles. The fragmentary force of external research assessments upon the discipline and upon higher education is the focus of the next and final section, but before proceeding to that topic it is necessary to comment upon the ninth and final theme from Table 4 (above) – ‘over-steer’.

One of the most important insights from recent experience in the UK is that research assessment has not evolved as a research evaluation exercise: it has evolved into a powerful incentive system that sets the ‘rules of the game’ (the meta-governance) that institutions feel they must play. It is not a survey and evaluation of research outputs but has come to signify a proxy rating of institutional excellence. The language and terminology of the REF was particularly significant in the sense that it was a ‘framework’ (i.e. a permanent incentive structure intended to shape the sector) rather than a more isolated or discrete ‘exercise’, as was the case with the RAE. Furthermore, it could be (and has been) suggested that the dominant interpretation of ‘excellence’ encourages a scholarship of risk-averse mediocrity rather than a scholarship of discovery that challenges foundational ways of understanding the world. But in many ways the introduction of research assessment processes has certainly succeeded in its core aim of encouraging universities to think about the management and governance of research funding. The unintended consequence, however, was a perception that an ‘over-steer’ had occurred within the sector whereby research became the focus and teaching became almost a nuisance or a distraction, something to be avoided or undertaken at the lowest common denominator in order to maximise research focus.

Unsurprisingly, whether this ‘over-steer’ has actually occurred and whether students are actually disappointed in the standard of teaching they have received is a contested issue. My own personal experience over the past twenty years would definitely lead me to support the argument that research was very much the primary focus within the main established universities in the UK. That does not mean that teaching standards were not upheld or that academics did not derive a huge amount of satisfaction from teaching, but it is to admit that the realpolitik of university life meant that tenure and promotion were driven by research assessments, not teaching evaluations. Teaching did not enjoy equal status with research but was almost a second-class endeavour. The perception of the current Conservative government in the UK is certainly that a significant degree of rebalancing is required, and in November 2015 the Universities and Science Minister, Jo Johnson, announced plans for the introduction of a Teaching Excellence Framework (TEF). The aim of this new initiative is to ‘build a culture where teaching has equal status with research, with great teachers enjoying the same professional recognition and opportunities for career and pay progression as great researchers’. Not surprisingly, the announcement that in future the REF would be partnered by a parallel teaching-focused assessment was not met with rejoicing in the lecture theatres or seminar rooms. Even students were unconvinced that there was a major problem with teaching standards that required such potentially drastic action. The government has promised to ensure that the TEF is a ‘light touch’ review process, but similar commitments were made about research assessment when it was first introduced in the mid-1980s. Moreover, the politics of the TEF has links to broader concerns about the impact of the REF in terms of increasing central government control over universities and facilitating a market-based managerialist logic. The introduction of the TEF is therefore attached to plans to lower the bar to ‘market entry’ in order to allow new universities to emerge, as well as potentially allowing institutions to increase tuition fees where they are assessed to be delivering a particularly high standard of teaching.

The introduction of research assessment in the UK has therefore led to a range of ‘negative externalities’ that have come as no surprise to scholars of public administration or regulatory governance. The tragedies of political science – as with so many other disciplines, and with the universities as a collective institutional endeavour – revolve around a failure to mount a politically astute strategy that might have framed or managed the imposition of these external pressures in a more appropriate, sensitive or proportionate manner. And yet the final argument of this article is that it is very difficult to understand the impact and implications of research assessment without having some broader grasp of how it forms just one element of a ‘bigger picture’ that highlights a set of issues that, when taken together, focus attention on a potentially catastrophic tragedy for political science in the future. That is the unbundling or unravelling of the discipline.

3. Gaps and Splinters

In order to fully understand and expose the politics of research assessment it is necessary to stand back from a focus on tools of research assessment or specific governance frameworks in order to reflect upon how this topic sits within a far broader professional profile. By this I mean the manner in which the nature of scholarship has changed and is changing and therefore how the notion or meaning of being a ‘University Professor of Politics’ – to use the phrase adopted by Bernard Crick in his ‘Rallying Cry to the University Professors of Politics’ that formed a new part of the second edition of his classic In Defence of Politics in 1981 – is also changing. How can scholars understand their role within and beyond academe? Why and in what ways have the professional pressures placed upon academics altered? What can they do to stop themselves going MAD? To make this apparently flippant statement about madness is actually to provide a link into an important topic – the rise of mental illness amongst UK academics. A study of academics discovered that job stresses had increased significantly in recent years while levels of job satisfaction and professional support had declined (see, for example, Kinman and Jones 2010). What this seam of scholarship reveals is that the introduction of research assessments in the UK is just one part of a broader story concerning the gradual imposition of an ever-expanding array of expectations and responsibilities upon university staff. I sometimes find myself envying former colleagues who now enjoy a more leisurely existence as emeritus professors and who held tenured appointments when the pace of academic life was certainly slower. This ‘slowness’ may well have been exactly what Mrs Thatcher interpreted as a rather over-protected and under-productive charmed scholarly life, but my deeper concern rests with the manner in which scholarship is being stretched to breaking-point, and one way of understanding and conceptualising this role expansion is through the notion of an expectations gap (Figure 2, below). As the most eagle-eyed reader will immediately have spotted, this is a rather simple heuristic that is never likely to be judged complex enough for a REF’able piece of work. But the simplest frameworks are often the best and in this regard Figure 2 illustrates how a ‘gap’ might be formed by the variance between the realistic level of capacity given the available resource package (i.e. the lower bar) and the public or political expectations placed upon an individual, organisation, community, discipline, etc.

Figure 2. The Expectations Gap.


It could be argued that the existence of a small ‘expectations gap’ may well be positive in the sense that it encourages ambition, reflects external confidence and forces institutions to consider innovations and adaptations. And yet a large expectations gap risks becoming pathological in the sense that institutional overload and burnout become real risks. Placed in the context of academe in general, and political science in particular, Figure 2 encourages a form of ‘gap analysis’ whereby the demands and pressures placed upon academics and their disciplines (i.e. the upper bar) are assessed against some reasonable conception of realistic capacity (i.e. the lower bar).

As already mentioned, the breadth of this article in terms of ‘the future of political science’ embraces a broad range of countries, sub-fields and institutions. The pressures on predominantly teaching-only universities or liberal arts colleges, for example, are likely to be very different from (though not necessarily lighter than) those facing Ivy League, Group of Eight or Russell Group universities in the United States, Australia and the United Kingdom (respectively). Indeed, the ‘expectations gap’ might be quite different in nature or size in different parts of the world, or between different parts of the higher education landscape within a polity. But the simple fact is that from Sheffield to Sydney vice-chancellors are increasingly speaking out about the existence of an untenable gap between supply and demand (see, for example, Burnett, 2016).2 In this context the options for closing the gap include:

  • Option 1: Increasing Supply (moving the lower bar up);
  • Option 2: Reducing Demand (moving the upper bar down);
  • Option 3: A combination of Options 1 and 2 (closing the gap from above and below).
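
For readers who prefer a more formal statement, the heuristic can be compressed into a single line. The notation below is mine rather than anything that appears in Figure 2, and is offered purely as a sketch:

G = E − C

where E denotes the expectations placed upon an individual, organisation or discipline (the upper bar), C denotes the realistic capacity given the available resource package (the lower bar), and G is the resulting gap. In these terms Option 1 raises C, Option 2 lowers E, and Option 3 pursues both so that G shrinks towards a small and manageable value; the pathological case discussed above corresponds to a large and growing positive G.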

The argument in relation to the UK is that an ‘expectations gap’ has emerged within British higher education and that this is having a splintering effect upon academic careers that has not been fully acknowledged. The simple position is that over recent decades the upper bar has been pushed upwards without a commensurate increase in resources. Higher education expansion underlines this claim. In 1950 just 3.3 per cent of young people in the UK went to university; by 1970 the rate was 8.4 per cent; and in 2015 the rate was nearer fifty per cent (over half a million young people taking up a university place). In the 1960s and 1970s small-group teaching would generally take place in an academic’s office and involve no more than a handful of students; in the 1990s small groups had expanded to ten or twelve students; and today small groups are often closer to twenty-five or thirty students. (The one-to-one tutorial system that has been at the heart of Oxbridge teaching for centuries is under increasing financial strain.) One early impact of the TEF is that universities have engaged in an almost bidding war to increase levels of teaching contact time for students, which will have obvious knock-on consequences for staff research capacity. The research assessment processes therefore form just one element of this gradual process of role accretion or sedimentation. Take, as a starting point, the five main components of an academic position in a British university:

  1. Research: As displayed through international peer-reviewed publications and significant external research grant income.
  2. Teaching: Evidence of excellence in teaching as displayed through student feedback and external audit processes.
  3. Administration: The capacity to undertake significant administrative and managerial responsibilities within and beyond your home department.
  4. Impact: The ability to demonstrate that your research has achieved a clear, direct and auditable ‘impact’ on non-academic research-users and/or the public.
  5. Citizenship: A clear contribution to professional ‘good citizenship’ through activities such as journal editing, external examining, pastoral responsibilities, government or parliamentary service, leadership of learned societies, etc.

To undertake world-class 4* research; to demonstrate ‘excellence’ in the teaching of evermore demanding students; to apply successfully for competitive research funding and fellowships while managing an ever-increasing bureaucratic burden; to deliver ‘impact case studies’ that could withstand almost forensic analysis as to their veracity; and to show evidence of professional engagement beyond your own university: the new work demands in higher education are arguably becoming untenable, and to some extent the splintering or fragmentary effect that forms the focus of this final section is a fairly obvious consequence. For those readers who think that I am the one over-inflating the contemporary situation in the UK, it is worth thinking in a little more detail about the expectations placed upon early career researchers in political science:

  • To trespass across disciplinary and professional boundaries while also displaying increased hyper-specialisation;
  • To enjoy ‘academic autonomy’ and ‘intellectual freedom’ in an increasingly directive and constrained environment;
  • To increasingly engage with quantitative methods and ‘big data’ while also producing nuanced, accessible and fine-grained analyses;
  • To manage the temporal misalignment between academic timescales and politics in practice;
  • To be able to ‘talk to multiple publics in multiple ways’ while acknowledging a constant pressure to ‘tech-up’ within political science;
  • To cope with a system where the incentive structure still pushes scholars towards ‘pure’ scholarship and peer reputation rather than ‘applied’ scholarship or public reputation;
  • To navigate the problematic relationship between facts and values, and the prevailing rhetoric of neutrality in research;
  • To innovate and share ‘best practice’ while also working in a competitive market environment;
  • To deliver world-class research and writing while also providing excellence in teaching;
  • To provide a personalized student-centred learning experience in a climate of mass and often digitally refracted access;
  • To take risks in what is generally a risk-averse professional environment;
  • To balance a traditional focus on ‘problem-focused’ political science with external demands for ‘solution-focused’ political science;
  • To ensure that research informs public debate without being ‘dumbed down’ or co-opted by partisan actors;
  • To be responsive to ‘students-as-customers’ while upholding academic standards and relationships; and
  • To achieve some notion of a personal, private or family life while fulfilling the demands of the role.

Turning back to the focus of Part II (above), what has in reality occurred in the UK in recent years is a REF-driven focus (bordering on obsession) on research as the primary component of an academic role. The TEF is therefore an attempt to rebalance higher education back towards teaching, while both the TEF and the REF will inevitably increase the administrative burden on individuals and departments. It would at this point be possible to locate this shift in the context of Ernest Boyer’s ‘Taxonomy of Scholarly Endeavours’, but I have done this elsewhere (Flinders, 2017); the real focus of this section is on professional splintering as both a gaming strategy and a personal coping strategy. What I mean by splintering is that the notion of an ‘all-rounder’ scholar who undertakes research, teaches and plays a leading role in the administration of either their department or their disciplinary learned society is eroding, and is being replaced by an increasingly fragmented community of political scientists – the modern ‘specialist scholar’.

Traditionally, British universities have maintained a broadly egalitarian approach whereby all staff are expected to undertake at least some element of teaching and administration. The exception was generally where staff had secured teaching ‘buy-outs’ through external research grants, but in the last two or three years a bifurcation between teaching-only and research-only staff has begun to emerge. Between these two extremes exists an increasingly large academic ‘precariat’, consisting generally of younger new entrants to the profession who are expected to accept either a succession of temporary (and generally teaching-focused) contracts or an even more precarious academic existence based on a portfolio of fractional roles undertaken concurrently at several different universities. Escaping ‘the precariat’ revolves around securing tenure, but even here a professional pathology exists in the form of a pressure to ‘publish or perish’ that inevitably incentivises a combination of hyper-specialisation and self-plagiarism. This, in turn, does little to nurture intellectual ambition and even less to build confidence amongst non-academic user-groups that political science has the capacity to respond to allegations of irrelevance. The flip-side is that exploring new approaches, developing new theories, demonstrating relevance or public value, investigating the nexus between disciplines, etc. – all of those activities that funders, research-users and governments around the world prioritise – demand time and the acceptance of positive inefficiencies (e.g. risks that do not pay off, roads to relevance that turn out to be cul-de-sacs). The contemporary tragedy of political science – to paraphrase Ricci (above) – is double-edged: the young fresh minds with the most to offer are immediately squeezed into a system that could have been designed to drive out ambition and creativity and to incentivise ‘playing safe’; while the profession as a whole offers little space for positive inefficiency, no matter how positive the returns might be.

4. Conclusions, and a Few Suggestions

Those with an awareness of very recent shifts within British higher education might respond that my analysis is outdated. ‘Doesn’t he know that “publish or perish” has been replaced by “quality over quantity”?’ I hear them cry. This is certainly the new mantra amongst vice-chancellors and deans, but the reality beneath the rhetoric is a professional sphere in which very few academics are brave enough (or have the intellectual headspace) to step off the publication production line. And yet at the other end of the spectrum it is possible to identify the recent emergence of a very small cadre of tenured ‘high-impact’ academics who enjoy a visibility within the practitioner and media spheres. The ‘stretch’ or ‘span’ of an academic career has therefore widened significantly, largely in response to the imposition of external audit regimes and higher expectations. The malleability of some institutions has reached breaking point, and this is reflected in the manner in which some teaching-focused universities have dropped out of the REF process and some research-focused universities are threatening to boycott the forthcoming TEF process (see Havergal, 2016). And yet my sense is that this fragmentation is locking in rather than challenging a number of pre-existing inequalities within the discipline. For example, the research professors and ‘high-impact’ professors generally make little contribution in the sphere of institutional or academic governance and undertake little (if any) teaching. They are also generally men.

The real tragedy of political science (or a central tragedy for political science) is that it has so far failed to acknowledge the politics and management of this expectations gap surrounding scholars, or to acknowledge its splintering dynamic. This leads me to suggest that the future of political science depends upon the emergence of ‘a new politics of political science’ that seeks to control and manage external pressures – to somehow close the expectations gap – for the collective good of the discipline. This would involve a new professionalism that permeates down from national learned societies, professional associations and funders, through institutional units, to individual scholars. That is, a new politics that is – quite simply – more aware of the external context in which science takes place and that balances internal expertise and external engagement. More specifically, the nexus between academe and society must form the focus of greater attention and, as a result, the role of an academic is likely to change. As the Brexit debate in the UK illustrated, politicians will always ignore or seek to reinterpret research that does not suit their partisan needs, but there is a far wider community of potential research users than the discipline generally recognises. The dominant perception of a clear qualitative distinction between ‘pure’ and ‘applied’ research will have to be recast in a more dynamic mode of understanding. More specifically, there will have to be some understanding of the manner in which ‘impact’ can actually underpin, nourish and nurture excellence in terms of both research and teaching. Once again, the ‘new politics’ or ‘new professionalism’ will have to understand the knowledge ecosystem in ways that have largely been forgotten but must now be rediscovered if the discipline is to prosper. The exact nature of this new disciplinary strategy will be for national associations and institutions to decide, but the following ideas offer elements of this ‘new politics’ that are worthy of consideration.

Firstly, political science cannot and should not adopt a victim mentality; it should instead cultivate a more robust and confident professional persona. In this regard the role of the main learned societies is vital, both as the source of external promotional activities and, more specifically, as the driver of proactive knowledge-brokerage, knowledge-filtering and knowledge-framing activities. Put within the framework of Figure 2, learned societies and professional associations have to support the discipline in raising the lower bar of realistic capacity where possible, while paying far more attention to their external, strategic role in actually managing the expectations of the public and policy makers vis-à-vis the upper bar (i.e. Option 3, above). Simply stated, learned societies and professional associations must take the lead in closing the expectations gap from above and below. In this regard relatively simple steps can yield significant returns. Given the temporal misalignment between academic timescales and politics in practice, for example, a clear approach to horizon-scanning is of particular significance, so that translated packages of research can be prepared and delivered to research users (media, practitioners, etc.) at the specific ‘windows of opportunity’ when demand for such information will be high. Moreover, learned societies, in partnership with funders and research-users, should also take the lead in innovating in relation to both training and bridging activities. Take, for example, the Political Studies Association of the United Kingdom’s annual Total Exposure competition.3 It would be almost impossible to design a simpler initiative: academics receive support, training and guidance on how to ‘pitch’ an idea to broadcasters based around translating their research into a documentary or series of documentaries for television or radio. Academics can submit ideas on their own or in small teams, inter-disciplinary ideas are encouraged, and the overall emphasis is on creativity and intellectual energy. A panel of senior commissioning executives then sifts the applications and selects twelve finalists, who are invited to London to make their pitches in person during a face-to-face sixty-second slot in front of the broadcast specialists.

There is no ‘prize’. No broadcaster is ever going to guarantee to commission a project through an open competition. But what Total Exposure does offer is an opportunity for academics to learn new translational skills and to expose themselves – personally and intellectually – to a new professional audience who approach the value of scholarship from a very different perspective. Three things are worth noting about Total Exposure. First, it has proved an incredible success. Of the twelve pitches shortlisted in 2016, nine received ‘call-backs’ to discuss their ideas in more detail with commissioners and one pitch was taken straight into production (Cathy Gormley-Heenan’s documentary on ‘the politics of peace walls’ around the world); in 2017 eight of the nine shortlisted pitches received call-backs and several look likely to move into production. Second, just as in politics, much of the real work takes place not within the sixty-second pitch or the subsequent discussion but in the coffee breaks and over lunch. The commissioners often have ideas for new programmes and are looking for new faces, new voices and new talents with the capacity to engage, inform and entertain in equal measure. Rejected pitches may well lead to unexpected opportunities at a later date. The final twist of Total Exposure takes us back to the issue of equality and diversity and flows into a set of debates concerning demographic change. Younger scholars, women and individuals from black and minority ethnic backgrounds have dominated the list of finalists. As such, the social composition of the shortlisted candidates tends to be far more representative of society at large – and therefore decidedly unrepresentative of the political science community in the UK. Total Exposure therefore not only takes the very best social and political science and translates it for dissemination through mass-access broadcasting platforms; it also appears to have somehow short-circuited some of the traditional professional blockages that prevent equality of participation and opportunity.

Put slightly differently, projects such as Total Exposure, led by the national learned society, begin to add tone and texture – even substance – to a ‘new politics of political science’ founded on an understanding that (1) the discipline has evolved to contain and sustain significant structural inequalities; (2) these inequalities cannot be ignored; and (3) each of Boyer’s forms of scholarship is mutually supportive, combining to sustain a rich intellectual ecosystem.

A second element of this ‘new politics of political science’ might take this more ambitious, coherent and holistic approach one step further through a generational approach to student recruitment that moves the focus down the educational pipeline, so that students in schools and colleges appreciate exactly what the study of politics involves and why it matters, its potential in both intellectual and vocational terms, and the professional career paths available via higher education. This educational pipeline provides a critical tool through which to understand and address long-standing issues concerning diversity and inequality and – beyond this – to democratise the study of politics for exactly those sectors of society who appear to have become disenchanted. Scholars in the field of political (dis)engagement have for some years outlined a shift in modes of political expression and activity away from traditional party-based, mass-member, formalised (‘old’) modes towards more individualised, issue-based, direct, digital and informal (‘new’) modes. But political science has arguably failed to utilise these insights when it comes to proactively promoting or demonstrating the value of the discipline. School ‘outreach’ events therefore continue to be held in the traditional institutions of politics – the city halls and parliaments – but rarely exhibit the creative dynamism that young people crave by ‘reaching-out’ within exactly those new political arenas, such as music, film or literature festivals, where debate, discussion and recruitment now take place. Even the language of politics needs to be considered within this new politics of political science. ‘Outreach’ and ‘reaching-out’ arguably carry subtle but subliminal connotations: the former somewhat cold, formal and distant (exactly those characteristics that ‘disaffected democrats’ level at politics); the latter perhaps far warmer, friendlier and more engaging.

A third element is highly political and involves the colonisation of the broader research community in terms of places on the boards of research bodies, government advisory bodies, international non-governmental organisations, media organisations, etc. My sense is that other disciplines have been far more professional and ambitious in monitoring when places on influential organisations are advertised and then encouraging (and supporting) members of their discipline to apply. This allows a discipline to embed itself far beyond the university sector and to have ambassadors in key posts. Once again, this regular vacancy monitoring and proactive encouragement is fairly low cost but potentially incredibly important for the external profile and visibility of a discipline. The targeting of professional appointments can also be built into a more ambitious equality and diversity agenda, while also being of value to the individual academic in terms of their ‘good citizenship’ requirements and the need for impact-related or research-related networks. (This targeted approach to recruitment also works in the opposite direction, in the sense that professional associations and learned societies might usefully include a number of non-academic research users on their own boards.) What these three elements really point to is the manner in which the ‘scientific’ and the ‘political’ (or the ‘academic’ and the ‘public’) components of a modern academic career are interdependent – almost symbiotic in the sense that they feed upon each other – at a time when the professional responsibilities of academics to the public who fund their work are increasingly explicit. In this regard, claims to be delivering more research of a higher quality will carry little weight if that research does not percolate through into the public sphere in accessible and purposeful ways. Without this ‘new politics’, political science will be politically disadvantaged (and therefore structurally disadvantaged in resource terms) vis-à-vis other disciplines in a climate of already shrinking resources. That really would be a tragedy.

Notes

1 See also: https://www.timeshighereducation.com/features/evolution-of-the-ref/2008100.article.

2 See, for example, Burnett, K. (2016) ‘Cash-starved campuses must raise fees or drop standards’, Times Higher Education, 1 September.

3 See also: https://www.psa.ac.uk/totalexposure.

References

  • Almond, G. (1990) A Discipline Divided. New York: Sage.
  • Barrow, C. (2008) ‘The Intellectual Origins of New Political Science’, New Political Science, 30:2, 215-244.
  • Burnett, K. (2016) ‘Cash-starved campuses must raise fees or drop standards’, Times Higher Education, 1 September.
  • Capano, G. and Verzichelli, L. (2010) ‘Good but not good enough: recent developments in political science in Italy’, European Political Science, 9:1, 102-107.
  • Capano, G. and Verzichelli, L. (2016) ‘Looking for Eclecticism? Structural and Contextual Factors Underlying Political Science’s Relevance Gap’, European Political Science, 1-22.
  • Easton, D. (1969) ‘The New Revolution in Political Science’, American Political Science Review, 63:4, 1051-1061.
  • Farr, J. (1988) ‘The History of Political Science’, American Journal of Political Science, 32:4, 1175-1195.
  • Flinders, M. and Kelso, A. (2011) ‘Mind the Gap’, British Journal of Politics and International Relations, 13:2, 249-268.
  • Flinders, M. (2013) ‘The Tyranny of Relevance and the Art of Translation’, Political Studies Review, 11:2, 149-167.
  • Flinders, M. and Dommett, K. (2013) ‘The Politics and Management of Public Expectations’, British Politics, 9:1, 29-50.
  • Flinders, M. (2013) ‘The Politics of Engaged Scholarship’, Policy & Politics, 41:4, 621-642.
  • Flinders, M., Savigny, K. and Awesti, A. (2016) ‘Pursuing the Diversity and Inclusion Agenda’, European Political Science.
  • Ginsberg, R. (1999) ‘Conceptualizing the European Union as an International Actor’, Journal of Common Market Studies, 37:3, 429-454.
  • Havergal, C. (2016) ‘Some Russell Group universities “could opt out of TEF”’, Times Higher Education, 1 September.
  • Hay, C. (2009) ‘Academic Political Science’, Political Quarterly, 80:4, 587.
  • King, G., Schlozman, K. and Nie, N. (eds) (2009) The Future of Political Science. London: Routledge.
  • Kinman, G. and Jones, F. (2010) ‘“Running up the down escalator”: stressors and strains in UK academics’, Quality in Higher Education, 9:1, 21-38.
  • Pielke, R. (2003) The Honest Broker: Making Sense of Science in Policy and Politics. Cambridge: Cambridge University Press.
  • Schmitter, P. (2002) ‘Seven (disputable) theses concerning the future of “transatlanticised” or “globalised” political science’, European Political Science, 1:2, 23-39.
  • Toje, A. (2008) ‘The Consensus-Expectations Gap’, Security Dialogue, 39:1, 121-141.
  • Trent, J. (2011) ‘Should Political Science Be More Relevant?’, European Political Science, 10, 191-209.
