Informing understanding and the practice of election polling
1. Summary of the impact
Political polling plays a crucial role in shaping the narrative of election campaigns, influencing party strategies, broadcasting regulation, media coverage, and the vote choices of citizens. Public confidence in the accuracy of polls in the UK was damaged by their collective failure to predict the outcome of the 2015 general election, while the outcome of the 2016 US presidential election also prompted debate over polling accuracy. Research by Professor Will Jennings has made important contributions to the understanding of the performance of election opinion polling in the UK and internationally, and has influenced polling methodology, the reporting of polls in the media, and the regulation of political polling in the UK. Specifically, it has led to changes in the rules of the British Polling Council (BPC) and the Market Research Society (MRS) and in methodological procedures used by UK polling firms; influenced the conclusions and recommendations of the House of Lords Select Committee on Political Polling and Digital Media; informed Ofcom’s regulation of election-related broadcast material; informed wider public understanding of election polling and trends in polling accuracy internationally; and facilitated media coverage of election polling in the UK, US, Spain and France.
2. Underpinning research
Professor Will Jennings has been working on survey methodology and opinion polling for over a decade. His research seeks to understand the methodological challenges faced in conducting surveys, the sources of error in the polls, and how voters’ preferences change over time in different political and electoral systems. It has also investigated whether polling inaccuracy has increased compared to the past. The underpinning research falls into three strands:
a. Polling errors and methodologies in the UK
Jennings was a member of the independent inquiry instigated by the BPC and MRS to investigate the performance of the pre-election polls at the 2015 general election. The inquiry was chaired by Jennings’ long-time collaborator Professor Patrick Sturgis and comprised an in-depth analysis of the errors in the 2015 general election opinion polls, published in the form of the Polling Inquiry report [3.1], with key findings summarised in an article published in the Journal of the Royal Statistical Society: Series A [3.2]. Jennings contributed widely to the underlying analysis and deliberations of the inquiry, and led in particular on the benchmarking of the 2015 polling error against the historical accuracy of polls in British general elections since 1945. The main conclusion of the research was that the errors were caused by unrepresentative sampling: the way that polling organisations recruited sample members resulted in systematic over-representation of Labour voters and under-representation of Conservative voters. The weighting and adjustment procedures applied to the raw data did little to mitigate this error. The research was able to rule out a range of other potential causes, including turnout weighting, postal voting, overseas voting, and late swing. The 2016 report made a number of recommendations for changes to the rules of the BPC, for the conduct of opinion polls by polling companies, and for how the results of opinion polls should be reported in the media. Jennings has subsequently, with Sturgis, undertaken research into the methodological challenges facing UK pollsters, such as the impact of turnout adjustments and the debate over the estimation of turnout among younger voters in the 2017 general election [3.3].
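The kind of weighting adjustment discussed above can be illustrated with a minimal sketch of cell-based post-stratification, in which each respondent is weighted by the ratio of their demographic cell’s population share to its sample share. This is a hypothetical illustration, not the actual procedure of any polling company; the age groups and figures are invented.

```python
# Minimal sketch of cell weighting (post-stratification):
# each respondent's weight is the population share of their
# demographic cell divided by that cell's share of the sample.
# All figures here are invented for illustration.

from collections import Counter

def poststratify(sample_cells, population_shares):
    """Return a weight per respondent, given each respondent's cell."""
    n = len(sample_cells)
    sample_shares = {cell: count / n for cell, count in Counter(sample_cells).items()}
    return [population_shares[cell] / sample_shares[cell] for cell in sample_cells]

# Hypothetical sample: younger voters over-represented relative to population.
cells = ["18-34"] * 50 + ["35+"] * 50          # sample is 50/50
population = {"18-34": 0.3, "35+": 0.7}        # population is 30/70

weights = poststratify(cells, population)
# Younger respondents are down-weighted (0.6), older ones up-weighted (1.4).
```

As the 2015 inquiry found, such reweighting can only correct imbalance on the variables used to form the cells; if the recruitment process is skewed within cells, the error survives the adjustment.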
b. Polling estimation
Most short-term fluctuations in opinion polling are attributable to random noise, or to differences due to the particular methodological choices made by pollsters. In a longstanding collaboration (‘the Polling Observatory’) with Pickup, Ford, Fisher and Wlezien, Jennings has conducted analysis of the state of public opinion in the UK, mainly relating to voting intentions for general elections but also ahead of the referendums on Scottish independence and UK membership of the EU. The ‘state-space’ time series method used to estimate voting intention is described in [3.4] and was used to produce election forecasts for the 2015 general election. The Polling Observatory team has produced private reports on party support for the broadcasting regulator Ofcom from March 2015 to the present.
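The general idea of ‘state-space’ pooling can be sketched simply: latent party support is modelled as a random walk, each poll is treated as a noisy observation of it, and a filter combines the polls into a smoothed estimate. The sketch below uses a basic Kalman filter under assumed noise parameters; it is an illustrative simplification, not the Polling Observatory’s actual model, and all figures are invented.

```python
# Illustrative local-level (random-walk) state-space model filtered over polls:
#   support_t = support_{t-1} + noise   (state equation)
#   poll_t    = support_t + error       (observation equation)
# A Kalman filter pools the noisy polls into a smoothed estimate.
# Parameters and poll readings below are invented for illustration.

def kalman_filter(polls, sample_sizes, drift_var=1e-5):
    """Return filtered estimates of latent vote share from poll readings."""
    est, var = polls[0], 1.0              # diffuse start centred on the first poll
    estimates = []
    for share, n in zip(polls, sample_sizes):
        var += drift_var                      # predict: random-walk drift
        obs_var = share * (1 - share) / n     # binomial sampling variance of a poll
        gain = var / (var + obs_var)          # update: weight placed on the new poll
        est = est + gain * (share - est)
        var = (1 - gain) * var
        estimates.append(est)
    return estimates

# Hypothetical sequence of polls of one party's support.
polls = [0.33, 0.35, 0.31, 0.34, 0.36]
ns = [1000, 1500, 1000, 2000, 1200]
smoothed = kalman_filter(polls, ns)
# The filtered series varies less than the raw polls.
```

The smoothed series moves less than the raw readings because each new poll receives only partial weight, which is how such methods separate genuine shifts in support from house effects and sampling noise.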
c. Polling accuracy
Through a longstanding collaboration with Wlezien, Jennings has undertaken extensive investigation of cross-national differences in how voting intentions change over the electoral cycle [3.5] and of historical trends in polling errors [3.6]. This research has been enabled by Jennings’ collection of an unprecedented dataset of over 30,000 opinion polls from 45 countries. It shows that polls converge on the final result earlier in parliamentary and party-centric systems. Most notably, it finds no evidence to support the claim that election polling has become more inaccurate over time, while also highlighting how different factors (e.g. party size, election type) are associated with higher or lower errors.
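One common summary of polling accuracy used in such benchmarking is the mean absolute difference between final-poll vote shares and the official result, averaged across parties. The sketch below illustrates that measure; the party names and shares are invented and do not come from the dataset described above.

```python
# Mean absolute error of final polls against the election result,
# averaged across parties -- one common summary of polling accuracy.
# Vote shares below are invented for illustration.

def mean_absolute_error(poll_shares, result_shares):
    """Average absolute gap (in percentage points) across parties."""
    assert poll_shares.keys() == result_shares.keys()
    return sum(abs(poll_shares[p] - result_shares[p])
               for p in poll_shares) / len(poll_shares)

poll = {"Party A": 34.0, "Party B": 34.0, "Party C": 8.0}
result = {"Party A": 37.8, "Party B": 31.2, "Party C": 7.9}
error = mean_absolute_error(poll, result)   # about 2.2 points here
```

Computing this measure consistently over many elections and countries is what allows any single polling miss to be placed in historical and comparative context.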
3. References to the research
3.1 Patrick Sturgis, Nick Baker, Mario Callegaro, Stephen Fisher, Jane Green, Will Jennings, Jouni Kuha, Ben Lauderdale, and Patten Smith. (2016). Report of the Inquiry into the 2015 British general election opinion polls. London: Market Research Society/British Polling Council. http://eprints.ncrm.ac.uk/3789
3.2 Patrick Sturgis, Jouni Kuha, Nick Baker, Mario Callegaro, Stephen Fisher, Jane Green, Will Jennings, Ben Lauderdale, and Patten Smith. (2018). ‘An assessment of the causes of the errors in the 2015 UK General Election opinion polls.’ Journal of the Royal Statistical Society: Series A 181(3): 757-781. https://doi.org/10.1111/rssa.12329
3.3 Patrick Sturgis and Will Jennings. (2020). ‘Was there a ‘Youthquake’ in the 2017 general election?’ Electoral Studies 64 (April). https://doi.org/10.1016/j.electstud.2019.102065
3.4 Rob Ford, Will Jennings, Mark Pickup and Christopher Wlezien. (2016). ‘From Polls to Votes to Seats: Forecasting the 2015 British general election.’ Electoral Studies 41(1): 244-249. https://doi.org/10.1016/j.electstud.2015.11.013
3.5 Will Jennings and Christopher Wlezien. (2016). ‘The Timeline of Elections: A Comparative Perspective.’ American Journal of Political Science 60(1): 219-233. https://doi.org/10.1111/ajps.12189
3.6 Will Jennings and Christopher Wlezien. (2018). ‘Election Polling Errors Across Time and Space.’ Nature Human Behaviour 2: 276-283. https://doi.org/10.1038/s41562-018-0315-6
4. Details of the impact
a. Polling errors and methodologies in the UK
This strand of the research has had two related impacts: i) Impact on the conduct and reporting of opinion polling in the UK through changes to the rules of the BPC and to the methodology and procedures of UK polling organisations, and ii) Impact on public policy, in the form of recommendations by the House of Lords Select Committee on Political Polling and Digital Media for regulation of the polling industry.
i) The changes in the rules of the BPC were set out in a statement on 31 March 2016, which states: “the Council has agreed to implement immediately rule changes that will (i) require greater transparency about how polls have been weighted, (ii) specify what changes, if any, have been made since a company’s previous published poll in how the data have been weighted or otherwise adjusted, and (iii) place an obligation on members to supply to any inquiry or committee that has been established by the BPC the micro data set for any poll in which that inquiry or committee has an interest”. The BPC subsequently introduced a new requirement on its members to publish a statement of the level of uncertainty of poll estimates: “the new BPC rule has been introduced in response to recommendation 11 of the inquiry into the 2015 British General Election Polls that was chaired by Prof. Patrick Sturgis” (1 May 2018) [5.1].
In addition to these changes to the rules of the BPC, the inquiry also influenced the methodologies of individual polling organisations. In a report published on 19 September 2017 [5.2], the BPC summarises the changes that member organisations made to their procedures in response to the inquiry report. These include changes to sampling and weighting procedures, the adoption of model-based turnout adjustment procedures, and new methods of treating respondents who do not report a vote intention. Nick Moon, Secretary of the BPC, said: “The vast majority of the recommendations were both accepted and acted upon and that demonstrates both the seriousness that the polling industry put on it and also of the kind of quality they thought it had” [5.3].
As publicly acknowledged in their blog posts [5.4], the Sturgis Inquiry’s findings shaped the methodological strategies of the pollsters YouGov and Survation, the latter subsequently being the most accurate polling company in 2017, with a polling error of less than 1%.
Interviews with pollsters following the 2019 general election linked the strong performance of the industry to improvements in sampling and weighting introduced in the wake of the Sturgis inquiry [5.5]. For example, Adam Drummond of Opinium, the most accurate pollster in the 2019 election, has said: “The overall thinking behind the BPC [Sturgis] Report has enormously informed our broader approach.” Anthony Wells of YouGov observed: “I’ve always thought it was a very useful report, and that core finding that it was the sampling which was the problem was correct and what people needed to address. The companies that did get it right were the companies who had gone back and addressed that problem.”
Additionally, the research has been reported on extensively in the national and international media [5.6]. A Guardian editorial described the Polling Inquiry Report as “the nearest thing to a definitive explanation”. The insights of the inquiry continue to inform the practice and understanding of polling in the UK today. Research by Jennings has more widely informed coverage of polling methodologies and their vulnerabilities. This has contributed to media and public understanding of polling in the 2016 EU referendum and the 2017 and 2019 UK general elections – via a range of commentaries, interviews and references to research in media reports [5.6].
ii) The research impact on policymaking has come about primarily as a result of its influence on the conclusions and recommendations of the House of Lords Select Committee on Political Polling and Digital Media, to which Jennings gave oral and written evidence, in addition to an informal closed briefing. The resulting report, published in April 2018 [5.7], made recommendations on the regulation of the polling industry including, notably, that a ban on the publication of opinion polls in the days leading up to an election should not be introduced. The written evidence given by Jennings on the measurement of polling accuracy was cited extensively in the report (para. 39), which also drew on his evidence on the comparative performance of the UK industry internationally and his finding that there was no global crisis in polling (see also section c). Overall, his evidence was cited nine times in the report.
b. Polling estimation
Jennings’ research with the ‘Polling Observatory’ has been used by Ofcom in its regulation of election-related broadcast material in the UK since 2015. Its estimates have been included in digests of evidence produced by Ofcom (March 2017, March 2018, February 2019, April 2019 and November 2019) and in its reviews of the Ofcom list of major political parties (March 2015 and March 2016). These informed Ofcom’s assessment of current party support, as indicated by opinion polls. Adam Baxter, Principal in Standards and Audience Protection for Ofcom, notes [5.8]:
“In carrying out its duties in the regulation of election-related broadcast material, Ofcom must rely on accurate summaries of evidence of current support, as shown in opinion polls in the four nations of the UK. Ofcom has therefore used the Polling Observatory’s aggregated summaries of the various polling companies’ Great Britain-wide opinion polls as a proxy for gauging levels of current support in England. The Polling Observatory figures have been an invaluable resource in enabling Ofcom to reach decision in the area of elections. Further, as an aid to broadcasters, Ofcom has included the Polling Observatory figures in the digests of evidence of past electoral support (i.e. election results) and evidence of current support (in the form of opinion polls) that we published ahead of the various elections that took place on: 4 May 2017; 8 June 2017; 3 May 2018; 2 May 2019; and 12 December 2019. We also included the Polling Observatory figures in the digests of evidence of past electoral support (i.e. election results) and evidence of current support (in the form of opinion polls) that we published ahead of the elections that were due to take place on 7 May 2020 and which were subsequently postponed…”
This research has thus informed the regulation of election broadcasting and official guidance provided to broadcasters. The Polling Observatory estimates were for a period published on the websites of The Daily Telegraph and The New Statesman, and additionally were featured in reporting in The Guardian, The Independent, The Times and other national media outlets [5.6].
c. Polling accuracy
Jennings’ research on cross-national and over-time trends in polling accuracy has informed understanding of the performance of election polling worldwide, in both the media and the polling industry. Jennings’ study of election polling accuracy in 45 countries [3.6] was also cited in the House of Lords report (2018, para. 42) [5.7], and was discussed by the leading US survey organisation Pew Research Center (2018) in a feature on ‘why well-designed polls can be trusted’ and in its field guide to polling for the 2020 presidential election [5.9]. The analysis and data were used extensively by the industry inquiry into the 2019 Australian federal election polls, enabling benchmarking of the polling error in Australia [5.10]. The research has also been used by the President of the World Association for Public Opinion Research (WAPOR), Professor Claire Durand, in assessing the performance of the industry [5.11].
The study findings were promoted by leading US commentators on polling and elections [5.13], including Nate Silver, who wrote in a piece for ABC News’ FiveThirtyEight:
“Despite often inaccurate and innumerate criticism over how polling fared in events like Brexit, a recent, comprehensive study of polling accuracy by Professor Will Jennings of the University of Southampton and Professor Christopher Wlezien of the University of Texas at Austin found polling accuracy has been fairly consistent over the past several decades in a variety of democratic countries in Europe, Asia and the Americas. The media narrative that polling accuracy has taken a nosedive is mostly bullshit, in other words. Polls were never as good as the media assumed they were before 2016 — and they aren’t nearly as bad as the media seems to assume they are now. In reality, not that much has changed.”
The study was highlighted further by Harry Enten, Senior Writer and Analyst for CNN Politics, on the leading podcast Pod Save America (which averages more than 1.5m listeners an episode), and has been the subject of commentaries from survey professionals including Scott Keeter, Senior Survey Advisor for the Pew Research Center, and Scott Clement, Polling Director for the Washington Post [5.6].
The polling industry has noted the important counter to ‘prevailing narratives’ regarding polling accuracy; for example, Ben Page, Chief Executive of Ipsos MORI, says the analysis “is a healthy corrective to suggestions that polls are getting less and less accurate”, while Anthony Wells of YouGov describes the findings as “incredibly useful to us in the industry and … to the cause of good science and good governance” [5.5]. The article [3.6] has an Altmetric score of 1,317 (19 December 2020), placing it in the top 4,200 articles out of 16.4 million tracked by Altmetric in terms of attention via social media, news coverage, and other modes of engagement. Altmetric noted 99 news stories in 85 international news outlets, and a total of 1,229 tweets from 1,007 users with an upper bound of 13.0 million followers.
The data collected for this study has also been used in election coverage by media organisations such as The Economist (French presidential election 2017, Brazilian presidential election 2018, UK general election 2019) and El País (Spanish general election 2016) [5.6]. As such it has enabled evidence-based analysis of elections. In June 2017, Idrees Kahloon, Data journalist for The Economist, noted [5.12]:
“Professor Jennings's work has been invaluable to our coverage of European elections in the past year. By our count, at least 10 articles would not have been possible without his exhaustive collection of international polls—and many more stories have cited his data in discussing historic polling error. Our French election model, which was, to our knowledge, the first of its kind put out by a major news organisation, produced probability estimates using Professor Jennings's data. Our coverage of the recent British election was also improved by these data: we showed how pollsters generally oversold Labour support, and how much the dramatic drop in Conservative vote-share compared with past elections. A long three-page story on polling we ran just last week was also helped greatly by discussions with Professor Jennings. And we look forward to doing similar work with the upcoming German elections. We're certainly proud of our coverage of polling, which we try to explain to our readers accurately and with statistical rigour. But we also know that this quality of coverage only comes with the helpful expertise that the professor has so kindly lent to us.”
G. Elliott Morris, also Data journalist for The Economist, describes the Jennings and Wlezien dataset as “indispensable” and “a staple of our commentary on elections in almost every country”, as it underpins his work in forecasting national elections.
Jane Frost, Chief Executive of the Market Research Society (MRS), the UK’s leading professional body for market research, highlights the broad importance of this programme of research to the £4.8 billion sector [5.13]:
“The MRS strives to promote the belief that “Evidence Matters”. This is particularly the case in relation to the performance of polls and their potential impact in turbulent times. Only truly robust independent studies have any chance of moving a polarised debate forward. The Sturgis Inquiry and Jennings study of polling accuracy worldwide were widely accepted for their results and their independence and have been instrumental in moving the debate on polling onto more constructive shared ground. The Inquiry to which they contributed is widely cited by MRS in, inter alia, evidence to the House of Lords investigation into polling.”
5. Sources to corroborate the impact
5.1 British Polling Council press releases: 31 March 2016 http://www.britishpollingcouncil.org/2016/03/ and 1 May 2018 http://www.britishpollingcouncil.org/british-polling-council-introduces-new-rule-on-uncertainty-attached-to-polls/
5.2 British Polling Council (2017). ‘How Have The Polls Changed Since 2015?’ 26 May 2017. http://www.britishpollingcouncil.org/how-have-the-polls-changed-since-2015/
5.3 Transcript of interview with Nick Moon, Secretary of the British Polling Council, 12 July 2019. Contact details of interviewer supplied.
5.4 Pollster blog posts: https://yougov.co.uk/topics/politics/articles-reports/2016/05/06/election-polls-new-methods-working and https://www.survation.com/survation-most-accurate-pollster
5.5 Transcript of semi-structured interviews with pollsters YouGov, Opinium, Ipsos MORI, Deltapoll, June-July 2019. Contact details of interviewer supplied.
5.6 Media coverage of the Polling Inquiry, Polling Observatory and Polling Accuracy.
5.7 House of Lords Select Committee on Political Polling and Digital Media. (2018). The politics of polling. https://publications.parliament.uk/pa/ld201719/ldselect/ldppdm/106/106.pdf
5.8 Testimonial, Adam Baxter, Principal in Standards and Audience Protection for Ofcom, 4 August 2020.
5.9 Pew Research Center. ‘Can we still trust polls?’ 14 May 2018. https://www.pewresearch.org/fact-tank/2018/05/14/can-we-still-trust-polls and ‘A Field Guide to Polling: Election 2020 Edition.’ 19 November 2019. https://www.pewresearch.org/methods/2019/11/19/a-field-guide-to-polling-election-2020-edition/
5.10 Testimonial, Darren Pennay, Chair of the Australian Polling Inquiry, 2 October 2020.
5.11 World Association for Public Opinion Research. (2017). ‘WAPOR submission to the House of Lords Select Committee on Political Polling and Digital Media.’ 30 August 2017. https://wapor.org/wp-content/uploads/WAPOR-House-of-Lords-Final-Version-08-30-17.pdf
5.12 Testimonials, Idrees Kahloon, 22 June 2017, and G. Elliott Morris, 5 January 2021, Data journalists, The Economist.
5.13 Testimonial, Jane Frost OBE, Chief Executive of the Market Research Society, 29 July 2020.