
Addressing the Harms of Computational Propaganda on Democracy

1. Summary of the impact

Howard’s pioneering research into online disinformation has focussed media and political attention on the rise of “computational propaganda”—disinformation and misinformation spread through social media. Howard and his ComProp team have not only demonstrated the need for social media firms and government regulators to address this serious issue but have also helped inform policy responses on an international scale. The UK government’s Department for Digital, Culture, Media and Sport (DCMS), the European Commission (EC), and the United States Senate Select Committee on Intelligence have all stated that Howard’s work has aided their understanding of online disinformation and computational propaganda, enabling them to identify and counter online disinformation, and engage social media firms in the exploration of regulatory options.

2. Underpinning research


Philip Howard is a professor of sociology, information and international affairs and the Director of the Oxford Internet Institute at the University of Oxford. An internationally recognised authority on technology and politics, he leads a programme of research that investigates political communication online and the role of automation in the spread of “junk news”.

[R1] Computational tools now play an important political role in areas such as news consumption and issue awareness. Drawing on quantitative analysis of social media data and interviews with people who design and deploy political “bots” and disinformation campaigns, this global overview presents case studies from Russia, Ukraine, Canada, Poland, Taiwan, Brazil, Germany, the United States, and China. Howard’s team of authors find automated manipulation of public opinion to be on the rise worldwide, with advances in computing technology making this both more sophisticated and harder to track.

[R2] Howard has argued that bots are a new domain of political communication, with pervasive technology increasingly being used to direct public sentiment and manipulate opinion. This article provides a formal description of computational propaganda and defines “political bots” as automated scripts designed to manipulate public opinion. It shows how these automated bots can interfere with political communication by allowing surreptitious campaign coordination, illegally soliciting contributions and votes, and violating election rules.

[R3] To understand what social media users share during important political events, Howard’s team undertook real-time data collection of political news shared during the 2016 US election and 2018 State of the Union address. Analysing over 20,000,000 tweets through manual and semi-automated coding, they produce a grounded typology of the information users shared online and develop the concept of “junk news” to describe sources that deliberately publish misleading, deceptive, or incorrect information packaged as real news. They find that users shared substantial amounts of junk news online, reflecting the influence of nonprofessional organizations and the declining influence of traditional gatekeepers of political communication, such as parties, the state, and policy experts.

[R4] Extending this method to recent elections in Europe, the team found low to moderate levels of amplified traffic, suggesting limited effects on social media sharing—albeit with amplification growing substantially around elections. The share of political traffic driven by these “amplifier accounts” was low in Germany (7.4%), compared with France (4.6-11.4%) and the UK (16.5%). Most of the UK political content shared on Twitter came from professional news sources (48.8%) and rarely from junk news sources (10.3%).

[R5] This article advances the small body of knowledge on domestic automation and opinion manipulation in China and presents the first piece of research into Chinese automation and opinion manipulation abroad, based on analysis of 1.5 million comments on official political information posts on Weibo and 1.1 million tweets. Little evidence of automation was found on Weibo, but a large amount was found on Twitter—published in simplified Mandarin and driven by a small number of anti-Chinese-state voices, presumably aimed at diasporic Chinese and mainland users accessing blocked platforms.

[R6] In this monograph, Howard presents original evidence about how manipulation and amplification of disinformation is produced, managed, and circulated by political operatives and governments, and discusses the evidence of automated manipulation in the Brexit referendum and 2016 US Presidential election. Finally, he describes paths for both democratic intervention and future research in this space.

3. References to the research

[R1] Woolley, Samuel and Philip N. Howard, eds. Computational Propaganda: Political Parties, Politicians, and Political Manipulation on Social Media. New York, NY: Oxford University Press, 2018. DOI: 10.1093/oso/9780190931407.001.0001 [output type: B]

[R2] Philip N. Howard, Samuel Woolley & Ryan Calo (2018) Algorithms, bots, and political communication in the US 2016 election: The challenge of automated political communication for election law and administration, Journal of Information Technology & Politics, 15:2, 81-93, DOI: 10.1080/19331681.2018.1448735 [output type: D]

[R3] Samantha Bradshaw, Philip N. Howard, Bence Kollanyi & Lisa-Maria Neudert (2019) Sourcing and Automation of Political News and Information over Social Media in the United States, 2016-2018, Political Communication, DOI: 10.1080/10584609.2019.1663322 [output type: D]

[R4] Neudert, L. M., Howard, P., & Kollanyi, B. (2019). Sourcing and Automation of Political News and Information During Three European Elections. Social Media + Society. DOI: 10.1177/2056305119863147 [output type: D]

[R5] Gillian Bolsover & Philip Howard (2019) Chinese computational propaganda: automation, algorithms and the manipulation of information about Chinese politics on Twitter and Weibo, Information, Communication & Society, 22:14, 2063-2080, DOI: 10.1080/1369118X.2018.1476576 [output type: D]

[R6] Howard, Philip N. Lie Machines: How to Save Democracy from Troll Armies, Deceitful Robots, Junk News Operations, and Political Operatives. New Haven, CT: Yale University Press, 2020. [output type: A – available on request]

This body of research has been supported by a number of funding grants on which Philip Howard was the PI, including two from the European Research Council (COMPROP: EUR1,980,112, 2015-2020; Restoring Trust in Social Media Civic Engagement: EUR149,132, 2017-2018) and one from the National Science Foundation (The Production and Detection of Bots: USD218,825, 2014-2016).

4. Details of the impact

Professor Howard and the Computational Propaganda (ComProp) team’s research findings have informed and shaped policy on online disinformation in the UK, the EU and the US. They have been recognised by policymakers on both sides of the Atlantic as ‘“pioneers” in the field of online disinformation’ [C1], having ‘produced the first wave of real research on how authoritarian regimes interfere in the elections of democracies using social media’ [C2]. According to the Head of the European Political Strategy Centre, the European Commission’s in-house policy think tank, ‘Prof Howard was one of the first researchers to expose the pervasiveness of politically-motivated bots and fake news on social media with his work on Brexit and the 2016 elections in the United States’ [R2, R3].

Howard has engaged extensively with policymakers to highlight the causes and consequences of online disinformation identified in his research, and helped shape policy responses in the UK, EU and US.

UK Policy Impact:

In December 2017, the ComProp team provided evidence to the House of Commons Digital, Culture, Media and Sport Committee’s inquiry into ‘Disinformation and “fake news”’. This evidence, which outlined the ways in which bots and algorithms are exploited by social media companies and political actors to spread misinformation and disinformation, was included, along with references to ComProp research, in the inquiry’s interim [C3] and final [C4] reports. Consequently, the evidence fed directly into recommendations in both reports that the government should impose effective regulation on social media providers and do more to tackle foreign political interference via social media platforms.

Acting on the recommendations of the report, the Department for Digital, Culture, Media and Sport (DCMS) outlined the government’s policy for future legislation with the publication of its Online Harms White Paper [C5] in April 2019. The white paper draws extensively on ComProp research demonstrating the scale of the problem of online disinformation and proposes a new regulatory framework for internet companies, including a statutory duty of care, mandatory reporting and increased transparency for both regulators and independent researchers [C5, Box 12]. The Online Harms legislation, according to the government’s response to the consultation results on the white paper, ‘is a key legislative priority for this government’, with the government stating in September 2020 that, though the COVID-19 pandemic had delayed its passage into law, it remained committed to introducing the Online Harms legislation ‘as soon as parliamentary time allows’ [C6, p.11].

Howard helped the DCMS to engage with technology firms by convening closed workshops at Oxford in March 2018, February 2019, and February 2020. These brought policy makers from the DCMS together with senior staff from Facebook, Google, Microsoft and Twitter. According to the DCMS’s Director of Security and Online Harms, Howard’s research standing enabled him ‘to pull together the right group for difficult conversations about the behaviour of the firms and the regulatory options that are now on the table. Such engagement is particularly welcome in helping senior policy makers … to stay on top of the latest research and analysis on complex issues, and to test ideas and review the policy options’ [C2]. He further adds that ‘Prof Howard has arguably done more than any other independent researcher to hold the attention of policy makers, journalists and the interested public on social media firms and malign influence within the UK’, and has ‘helped maintain the attention of policy makers and journalists on the nuances of the problem and has directly shaped how policy makers in the UK frame and respond to the problem’ [C2].

EU Policy Impact and Code of Practice:

According to the head of the European Commission’s (EC) European Political Strategy Centre (EPSC), Howard and his ComProp research team have provided the EC with ‘an intellectual framework and empirical basis that has […] proven valuable for policymaking’ and have enabled the Commission to understand the full severity of online disinformation in Europe [C1].

In 2017, Howard’s research team presented the project’s key research findings and recommendations to EU policymakers at two events. The first, in September 2017, was at a meeting of the Alliance of Liberals and Democrats for Europe (ALDE) in Brussels, where Marietje Schaake, MEP, and David Kaye, UN Rapporteur for Freedom of Speech, provided commentary on ComProp’s research findings and recommendations [C7]. The second event, in November 2017, was a multi-stakeholder conference on Fake News, at which the European Commissioner for Digital Economy and Society announced the formation of the EC’s ‘High-Level Expert Group on Fake News and online disinformation’. Following the conference, Howard was one of five international experts asked by the EPSC to contribute testimony to a hearing on ‘Preserving Democracy in the Digital Age’ in February 2018, supporting the work of the High-Level Group [C8].

In March 2018 the High-Level Group’s final report, entitled A Multi-dimensional Approach to Disinformation [C9], echoed many of the findings and recommendations that Howard had presented to the EPSC hearing. For example, Howard’s observation in his oral testimony that ‘politicians in the West’, as well as their Russian counterparts, were using communication strategies to spread disinformation to their voters [C8] was reflected in the High-Level Group’s identification of political actors, in both European and non-European governments, as purveyors of disinformation and thereby a fundamental cause of online disinformation in the EU [C9, p. 11]. Two of Howard’s suggested solutions—that algorithmic checks should be introduced and that social media firms should share their data with researchers [C8]—were also incorporated into the report’s recommendations [C9]. The report also recommended a Code of Practice, with two of its ten key principles referencing the need for platforms to enable access to data for researchers [C9, pp. 32-33]; the Code was implemented in September 2018 and has been signed by Google, Facebook, Twitter, and others [C10]. To facilitate the practical application of this mandate, Howard included senior EU officials along with DCMS staff (as described above) when he convened policy leadership from social media firms for closed sessions in Oxford in March 2018, February 2019, and February 2020. At the first meeting there were animated discussions about what, if anything, the EU should do to regulate the platforms, and the notion of a voluntary Code of Practice was debated. By the second and third meetings the Code [C10] was in place and discussion turned to platform responses and their action and inaction around protecting the EU and UK elections. The EPSC confirms that ‘Prof Howard's convening power has made possible direct and frank exchanges with executives of the social media firms’ [C1].

US Senate Select Committee on Intelligence’s Russia Investigation:

In May 2017, Howard authored an op-ed piece in the Washington Post, arguing that the CEOs of the major US social media companies should be compelled to testify before Congress on Russia's use of their platforms to interfere in the 2016 election. This received significant media and public interest, which the chair and vice-chair of the US Senate Select Committee on Intelligence (SSCI) say triggered ‘a national conversation on this subject that culminated in the Committee's indeed calling these CEOs to testify in September 2017’ [C12].

Prior to this, according to the SSCI chairs, ‘Public knowledge about the use of automation, algorithms, and big-data analytics to manipulate public opinion in targeted ways was exceedingly limited… Insights specific to Russia’s use of these methodologies, or “computational propaganda”, were largely press-based and anecdotal’ [C12]. The SSCI asked Howard to act as a formal consultant, and he joined the Committee’s Technical Advisory Group in 2018, provided in-person briefings for senior staff, and aided the preparation of the Committee's inquiries [C12]. He was also asked by the Committee to testify in an open hearing on 1 August 2018 [C13]. The SSCI chairs say that Howard’s research has been ‘essential to the Committee's understanding of how Russia endeavoured to interfere in the 2016 US presidential election’ and its understanding of the ‘role of social media in the execution of foreign influence operations… his insights have entrenched our resolve to find the appropriate policy response to this vexing concern’ [C11].

Using data provided by social media firms to the SSCI, in December 2018 Howard and social media analytics firm Graphika published the first major analysis of the activities of the Internet Research Agency (IRA), a group with links to the Kremlin and Russian intelligence agencies. The Committee’s subsequent report on Russian interference in the 2016 Presidential election directly references Howard’s research multiple times, including the evidence that Russia’s online campaign was amplified by the IRA’s production of content on social media platforms, citing Howard’s finding that ‘the activity on Twitter constitutes the IRA's first use of a social media platform to conduct information warfare against the United States’ [C12, p. 51]. The report to Congress recommended that ‘Information sharing between the social media companies and law enforcement must improve, and in both directions’, and that the scope of existing federal election laws should be extended to online media to ensure that ‘Americans know the sources of online political advertisements’ [C12, p. 80].

Senator Mark Warner, the vice-chair of the Committee, is also a co-sponsor of the Honest Ads Act, a bill currently pending in the US Senate, which would amend the Federal Election Campaign Act in this way. The text of the bill cites ComProp research into the scope of Russian social media manipulation in the 2016 election [C14].

5. Sources to corroborate the impact

C1: Factual statement/letter from Head of the European Political Strategy Centre, 19th July 2019.

C2: Factual statement/evidence letter from Director of Security and Online Harms, Department for Digital, Culture, Media and Sport, 23rd September 2019.

C3: DCMS Committee interim report “Disinformation and ‘fake news’”, 24th July 2018.

C4: DCMS Committee final report “Disinformation and ‘fake news’”, 14th February 2019.

C5: HM Government, “Online Harms White Paper”, April 2019. https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/793360/Online_Harms_White_Paper.pdf

C6: Government response to the House of Lords Democracy and Digital Technologies Committee Report on Digital Technology and the Resurrection of Trust (September 2020). https://committees.parliament.uk/publications/2308/documents/22803/default/

C7: Video footage of ‘Protecting democracy in a post-truth era’, Alliance of Liberals and Democrats for Europe at the European Parliament in Brussels, 6th September 2017. https://www.marietjeschaake.eu/en/protecting-democracy-in-a-post-truth-era and https://www.youtube.com/watch?v=wT6R4u5cLJs

C8: Full transcript of High-Level Hearing: Preserving Democracy in the Digital Age, 22nd February 2018.

C9: “A Multi-dimensional Approach to Disinformation”, report of the High-Level Expert Group on Fake News and Online Disinformation, European Commission, March 2018.

C10: European Commission, Code of Practice on Disinformation (news article), September 2018. https://ec.europa.eu/digital-single-market/en/news/code-practice-disinformation

C11: Factual statement/letter from Chairman and Vice Chairman of the US Senate Select Committee on Intelligence, 27th June 2019.

C12: Report of the Select Committee on Intelligence, United States Senate, on Russian Active Measures: Campaigns and Interference in the 2016 US Election. Volume 2: Russia’s Use of Social Media with additional views, 116th Congress, 1st Session, Report 116-XX, October 2019

C13: Testimony of Philip N. Howard, Oxford University “Foreign Influence on Social Media Platforms: Perspectives from Third-party Social Media Experts” Senate Select Committee on Intelligence, Open Hearing, August 1, 2018. https://www.intelligence.senate.gov/sites/default/files/documents/os-phoward-080118.pdf

C14: S.1989 - Honest Ads Act, US Senate. https://www.congress.gov/bill/115th-congress/senate-bill/1989/text

Additional contextual information

Grant funding

Grant number Value of grant
767454 £128,562
648311 £1,706,993
8060 £156,303