Experimental political science is now “hot” in our discipline (Morton, 2010). Indeed, many indicators attest to a lively and well-established disciplinary sub-sector: formal graduate training courses (and syllabi);1 handbooks (Druckman, Green, Kuklinski and Lupia, 2011) and manuals (Morton and Williams, 2010); research centers;2 a professional journal;3 professional organizations;4 and infrastructural resources.5 All this corresponds to a steady increase in the number of articles published in professional journals that use and report experiments on a wide range of issues in the spheres of domestic and comparative politics and international relations.6 While in the early 1970s McConahay (1973: 361) indicated that manuscripts reporting experimental results were unfairly treated, some three decades later Druckman et al. (2006: 632-633) claimed that experimental articles have a greater chance of being cited – a relevant, though imperfect, measure of attention – than non-experimental ones.
How good is this flurry of activity for the discipline? Where do we in Europe and in Italy stand in relation to these trends? Is experimental political science merely a fad, soon to pass us by on this side of the Atlantic? Three main points will be made here. First, the experimental turn in political science is here for the long run, and for the betterment of the discipline. Second, Italian political science is lagging behind. Third, we have an interest in catching up.
Experimental political science is here to stay thanks to a fundamental (and silent) methodological revolution taking place in the discipline. Key to cementing its presence has been the progressive erosion of the clear-cut barrier between experimental, comparative and statistical designs erected in the 1970s. For many years, the prevailing mood in the discipline was one of deep pessimism about the utility of experimental research in political science. Lijphart (1971)7 set the tone when he contrasted experiments, “the most nearly ideal method for scientific explanation,” with the comparative method, seen as “only a very imperfect substitute” for the former. While the first choice on Lijphart’s methodological wish list was experiments, he lamented that “unfortunately it can only rarely be used in political science because of practical and ethical impediments” (Lijphart, 1971: 683-684). This view, however, was not universally held. At around the same time, Brody and Brownstein (1975) were optimistically arguing in favor of a more militant approach to experimentation in political science. In what is usually considered the first systematic review of experimental political science, Brody and Brownstein (1975: 253) claimed that it “has a useful place in political research, and … it represents a powerful tool.”
With the benefit of hindsight, Brody and Brownstein appear to have been closer to the mark than Lijphart. This is, I surmise, attributable to three converging, though quite different, developments that have made the distinctions among research designs less rigid and the borders more porous. First is an explicit attempt to distill a unifying logic of research in political science (King, Keohane and Verba (KKV), 1994). Although severely criticized (see e.g. Brady and Collier, 2004, who nonetheless share KKV’s main scientific thrust), this attempt has socialized a new generation of political scientists to think in experimental terms even when their key interests are firmly grounded in observational designs.
Second is a thorough revision, by philosophers and statisticians, of several statistical assumptions underlying the linear model and its most popular tool, regression analysis. The Holland-Rubin model (Holland, 1986) and the debate about causality in observational and experimental research (e.g. McKim and Turner, 1997) have contributed to making political scientists more alert to the implications of different models of causality.
Third, and connected to the first two, the last two decades have seen a determined – and by and large successful – attempt to ground case studies and comparative research designs on a firmer methodological base (see Ragin, 1987 and George and Bennett, 2005). The experimental design can now be seen as “a template for case study research” (Gerring and McDermott, 2007)8 and rules of scientific inference for experimental and statistical or observational designs are often discussed together.
While there is greater interaction between experimental and non-experimental, observational, research, the very nature of what defines an experiment has undergone significant change as well. In short, what now counts as an experiment – as seen from our discipline’s viewpoint – is better described as a family resemblance concept. To appreciate this conceptual change, let us compare the hallmarks of an experiment with what now runs under this heading in political science. Specifically, there are three key elements of an experiment: (a) a comparison, usually between those who are exposed to the treatment of interest (the ‘test group’) and those who are not (the ‘control group’); (b) the random assignment of individuals, objects or things to one or the other group; and (c) the manipulation of the independent variable (the ‘treatment’) whose effects the research team wants to study.
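The interplay of these three elements can be sketched in a short simulation. The scenario and numbers below are purely hypothetical, invented for illustration and not drawn from any study cited here: subjects are randomly assigned to a test and a control group, a treatment with a known effect is administered, and the comparison of group means recovers that effect.

```python
import random

# Hypothetical illustration: 100 simulated subjects, a treatment whose
# true effect we set to 0.15, and a difference-of-means comparison.
random.seed(42)  # fixed seed so the simulation is reproducible

subjects = list(range(100))
random.shuffle(subjects)                 # (b) random assignment
test_group = subjects[:50]
control_group = subjects[50:]

def outcome(treated):
    """Simulated support score: baseline noise plus the treatment effect."""
    baseline = random.gauss(0.5, 0.1)
    return baseline + (0.15 if treated else 0.0)  # (c) the manipulation

test_scores = [outcome(True) for _ in test_group]
control_scores = [outcome(False) for _ in control_group]

# (a) the comparison: estimated treatment effect as a difference of means
ate = (sum(test_scores) / len(test_scores)
       - sum(control_scores) / len(control_scores))
print(f"estimated treatment effect: {ate:.3f}")
```

Because assignment is random, the two groups differ, on average, only in their exposure to the treatment, so the difference of means is an unbiased estimate of the effect built into the simulation.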
In practice, experimental political science has been extremely flexible in accommodating various violations of these requirements and in adapting its designs to the needs and problems typical of political science. Quasi-experiments and natural or field experiments especially convey this flexibility of design and logic. Quasi-experiments (Cook and Campbell, 1979) compare nonequivalent groups when random assignment is imperfect or impossible. Natural experiments (Dunning, 2012) forgo the manipulation of the treatment and instead look for naturally occurring random assignment.9 What both these methods have in common is their departure from the standard laboratory experiment with its controlled environment. In this way, they circumvent what was perceived as one of the main obstacles to the wider application of experiments – artificiality – which had made the design unpopular in political science. At the same time, our colleagues’ ingenuity and creativity have increased the frequency of laboratory experiments and enlarged their scope of application to a variety of sectors in political science (for a review see Webster and Sell, 2007).
A third, this time technological, development that has sped up the diffusion of experimental political science is the introduction of CATI (and later CAWI) in survey research. The use of computer-assisted telephone interviewing (and now web-based interviewing) has rendered experimental designs useful in the very area of political science most in need of such a tool: the study of public opinion through surveys. A key innovation by Sniderman and his collaborators at Berkeley10 was “to combine the distinctive external validity advantages of representative public opinion survey with the decisive internal validity strengths of the fully randomized, multifaceted experiment” (Sniderman and Grob, 1996: 377). This development has made population-based experiments (Mutz, 2011) one of the most popular applications in political science and a vital source of data on a wide variety of issues.
Where are we in Italy and in Europe in relation to these developments? The answer is: well behind. In one of the first systematic attempts to offer a European perspective on experimental political science, Kittel, Luhan and Morton (2012) lament that articles using experimental designs are still “a rare bird in European journals” (Kittel, Luhan and Morton, 2012: 7).11 If in Europe experimental political science is in its infancy, in Italy we are still at the gestation stage. The Italian political science community has long espoused Lijphart’s dissatisfaction with experimental approaches. In the most influential political science introductory book in Italy, the Antologia di Scienza Politica, edited by Giovanni Sartori in 1970, Urbani opens the section on methodological issues by reiterating that “[f]or obvious reasons, the experimental method in political science can be used only in very rare circumstances, such as the study of small groups that can be observed only in almost exceptional circumstances” (Urbani, 1970: 41). Not much has changed since then. The most recent review of methodological issues, published to celebrate the last 40 years of Italian political science (Calise and Cartocci, 2013), contains not a single reference to experimental applications. This is not surprising: there is simply nothing to report. In the 40 years of the Rivista Italiana di Scienza Politica only a single article (written in 1972 by Calcagno and Sainz) has the word sperimentazione in its title.12
Change is, however, brewing under the surface. In the last few years, ITANES has embedded experimental manipulations into some of its questionnaires.13 The Laboratorio Analisi Politiche e Sociali of the University of Siena has started to include experimental manipulations and vignettes in surveys conducted for both private and public clients. For the past two years, the graduate program in political science of SUM/University of Bologna and Siena has offered crash courses in experimental political science, inviting political scientists and economists to discuss their experiences and results.
Given that the situation is slowly moving forward, I conclude by offering three reasons why young researchers – as well as experienced ones – should seriously consider experimental methods: experimental political science is sharper, cheaper, and easier to analyze than other research designs. First, experimental designs help us sharpen our causal statements. What makes experiments theoretically rewarding is that they force us to pinpoint the precise nature of the causal relationship under test. The clarity and precision required to set up an experiment help make our theoretical models sharper, a crucial condition for theoretical progress.
Second, experiments are cheaper to implement than other designs. I refer not only to readily available, free-of-charge resources such as Time-sharing Experiments for the Social Sciences (TESS),14 but also to the potential pool of interested students accessible through graduate and undergraduate courses. Experimental laboratories are now available in some Italian universities and present the opportunity to conduct experiments within a controlled environment. For the young graduate student looking for potentially interesting and promising venues for research, these laboratories offer a useful launching pad to investigate issues ranging from voting behavior and institutional cooperation to gaming and coordination.
Lastly, experiments are easier to analyze than many quantitative observational designs. This is not to say that, once a proper experimental design is in place, all one needs is a comparison of means or proportions; rather, given the array of methods and techniques one must master in order to publish papers in professional journals today, experimental political science can let you get away with less.
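To illustrate how lean the analysis can be, consider a hypothetical get-out-the-vote experiment with invented numbers (not data from any study cited here): the headline result is a difference of proportions, and its significance test needs nothing beyond a two-sample z statistic computed with the standard library.

```python
import math

# Hypothetical turnout data: 47 of 100 treated subjects voted,
# versus 38 of 100 in the control group.
n_t, x_t = 100, 47   # treatment group size and number who voted
n_c, x_c = 100, 38   # control group size and number who voted

p_t, p_c = x_t / n_t, x_c / n_c
effect = p_t - p_c                        # estimated treatment effect

# Two-sample z test for a difference of proportions (pooled variance)
p_pool = (x_t + x_c) / (n_t + n_c)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_t + 1 / n_c))
z = effect / se

print(f"effect = {effect:.2f}, z = {z:.2f}")
```

With random assignment doing the work of controlling for confounders, there is no model specification, no covariate selection, and no regression diagnostics to defend; that is the sense in which the design lets the analysis stay simple.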
1 See some examples at this link.
2 See the Center for Experimental Social Science (C.E.S.S.) at New York University, which has organized an annual NYU-CESS experimental political science conference for the past seven years.
3 The forthcoming Journal of Experimental Political Science published by Cambridge University Press.
4 See the APSA experimental research section and section panel list online. The section also publishes a biannual newsletter, The Experimental Political Scientist.
5 See the Time-Sharing Experiments for the Social Sciences as well as the experimental political science labs at MIT (directed by Adam Berinsky), Northwestern University (directed by Druckman) and the Experimental Lab Consortium.
6 For statistics on the growth of experimental papers in the main political science journals see: McGraw and Hoekstra (1994); Morton and Williams (2008); Druckman, Green, Kuklinski and Lupia (2011: 4-5); Kittel, Luhan and Morton (2012).
7 Of course, Lijphart was not alone in his indictment of experimental design. Most of the comparativists, Sartori (1970) included, shared this pessimistic evaluation of its applicability.
8 Again, to show that the seeds of where we are now were planted well before, Sam Stouffer (1950), in his seminal paper on “Study Design” in the American Journal of Sociology, used the controlled experiment as a template for the future of social research, predicting that “we will see more of full experimental design in sociology and social psychology in the future than in the past” (Stouffer, 1950: 358). While we had to wait 50 years or so, he was right.
9 Natural experiments relate to observational studies in that causes are randomly (or “as-if” randomly) assigned by nature and not by the experimenter to the test and control groups. On the other hand, they share with experimental settings the fact that the confounders are taken care of by the research design and not by statistical control.
10 See Sniderman and Grob (1996) for a short description of this development.
11 Kittel et al. (2012: 7-8) report that the first experimental panel at ECPR was set up at the 2009 General Conference in Potsdam, followed by another in 2011 at the St Gallen Joint Sessions of Workshops. Only 13 experimental papers were published in the most important European journals between 2000 and 2011, most of them after 2007.
12 This result is based on a quick search of the entire dataset of issues of the Rivista Italiana di Scienza Politica, scanning for the keywords esperiment*, speriment* and experimen* in titles for the period 1970 to 2011. I thank Luca Verzichelli for making this dataset available to me.
13 I thank Paolo Bellucci for this information. See also Corbetta and Colloca (2014).
14 TESS fields studies on a representative sample of adults in the United States via the GfK (formerly Knowledge Networks) Internet survey platform, one of the most respected Internet samples available today. The principal investigators of TESS are currently Jeremy Freese and James Druckman of Northwestern University.
- Brady, H.E. and Collier, D. (eds.) (2004), Rethinking Social Inquiry. Diverse Tools, Shared Standards, Lanham, MD: Rowman & Littlefield.
- Brody, R. A. and Brownstein, C. N. (1975), ‘Experimentation and Simulation’, in Greenstein F. I. and Polsby N. W. (eds.), Handbook of Political Science. Strategies of Inquiry, 7, Reading, MA: Addison-Wesley, 211-263.
- Calcagno, A.E. and Sainz, P.(1972), ‘Sperimentazione quantitativa e sistemi politici’, Rivista Italiana di Scienza Politica, 2(2): 265-302.
- Calise, M. and Cartocci, R. (2013), ‘Concetti e Metodi’, in Pasquino G., Regalia M., Valbruzzi M. (eds.), Quarant’anni di Scienza Politica in Italia, Bologna: Il Mulino, 35-47.
- Cook, T.D. and Campbell, D.T. (1979), Quasi-Experimentation. Design and Analysis Issues for Field Settings, Boston, MA: Houghton Mifflin.
- Corbetta, P. and Colloca, P. (2014), ‘Electoral Choice: Typology of Processes Underlying Voters Decisions’, Rivista Italiana di Scienza Politica, 44(1): 29-54.
- Druckman, J.N., Green, D.P., Kuklinski, J.H. and Lupia, A. (2006), ‘The Growth and Development of Experimental Research in Political Science’, American Political Science Review, 100(4): 627-635.
- Druckman, J.N., Green, D.P., Kuklinski, J.H. and Lupia, A. (eds.) (2011), Cambridge Handbook of Experimental Political Science, New York, NY: Cambridge University Press.
- Dunning, T. (2012), Natural Experiments in the Social Sciences: A Design-Based Approach, Cambridge, UK: Cambridge University Press.
- George, A.L. and Bennett, A. (2005), Case Studies and Theory Development in the Social Sciences, Cambridge, MA: The MIT Press.
- Gerring, J. and McDermott, R. (2007), ‘An Experimental Template for Case Study Research’, American Journal of Political Science, 51(3): 688-701.
- Holland, P.W. (1986), ‘Statistics and Causal Inference’, Journal of the American Statistical Association, 81(396): 945-960.
- King G., Keohane, R.O. and Verba, S. (1994), Designing Social Inquiry. Scientific Inference in Qualitative Research, Princeton, NJ: Princeton University Press.
- Kittel, B., Luhan, W.L. and Morton, R.B. (2012), Experimental Political Science. Principles and Practices, London: Palgrave Macmillan.
- Lijphart, A. (1971), ‘Comparative Politics and the Comparative Method’, American Political Science Review, 65(3): 682-693.
- McConahay, J.B. (1973), ‘Experimental Research’, in Knutson, J.N. (ed.), Handbook of Political Psychology, San Francisco, CA: Jossey-Bass, 356-382.
- McGraw, K.M. and Hoekstra, V.H. (1994), ‘Experimentation in Political Science: Historical and Future Directions’, in Delli Carpini, M.X., Huddy L. and Shapiro R.Y. (eds.), Research in Micropolitics. New Directions in Political Psychology, 4, Greenwich, CT: JAI Press, 3-30.
- McKim, V.R. and Turner, S.P. (eds.) (1997), Causality in Crisis. Statistical Methods and the Search for Causal Knowledge in the Social Sciences, Notre Dame, IN: University of Notre Dame Press.
- Morton, R.B. and Williams, K.C. (2010), Experimental Political Science and the Study of Causality. From Nature to the Lab, New York, NY: Cambridge University Press.
- Mutz, D.C. (2011), Population-based Survey Experiments, Princeton, NJ: Princeton University Press.
- Ragin, C.C. (1987), The Comparative Method. Moving Beyond Qualitative and Quantitative Strategies, Berkeley, CA: University of California Press.
- Sartori, G. (1970), ‘Concept Misformation in Comparative Politics’, American Political Science Review, 64(4): 1033-1053.
- Sniderman, P.M. and Grob, D.B. (1996), ‘Innovations in Experimental Design in Attitude Surveys’, Annual Review of Sociology, 22: 377-399.
- Stouffer, S.A. (1950), ‘Some Observations on Study Design’, American Journal of Sociology, 55(4): 355-361.
- Urbani, G. (1970), ‘Metodi, Approcci e Teorie: Introduzione’, in Sartori, G. (ed.), Antologia di Scienza Politica, Bologna: Il Mulino, 31-53.
- Webster, M. and Sell, J. (eds.) (2007), Laboratory Experiments in the Social Sciences, Elsevier.