
Impact case study database

The impact case study database allows you to browse and search for impact case studies submitted to REF 2021.

Submitting institution
City, University of London
Unit of assessment
11 - Computer Science and Informatics
Summary impact type
Societal
Is this case study continued from a case study submitted in 2014?
No

1. Summary of the impact

City’s partnership with BetBuddy has created a fully explainable analytics platform to help identify and reduce problem gambling. In 2017 the system was acquired by Playtech, and the technology currently supports nine gambling platforms used by approximately 3 million registered gamblers worldwide.

The research has helped the gambling industry meet increasingly stringent regulation and has enabled individuals to gamble more securely and responsibly online. The team are at the forefront of public engagement in responsible gambling and have attracted widespread media attention for their research. More recently, the research partnership has evolved to explore how big data might help prevent money laundering within the gambling industry.

2. Underpinning research

Rapid growth in global online gambling has seen a rise in vulnerable gamblers in many countries. A strategic partnership between City academics (Garcez, Weyde and Slabaugh) and UK start-up BetBuddy has developed a novel solution: explainable machine learning models capable of predicting problem gambling behaviour – a unique development within the industry.

Building on Garcez’s earlier work on the ‘explanation capability’ of neural networks [3.6], and drawing on the analysis of online casino gambling data [3.4], City’s former Research Centre for Machine Learning (ML) began a dynamic research partnership with BetBuddy in 2014 as part of Innovate UK/EPSRC-funded research projects to better understand how AI might help in the prevention of problem gambling (see Section 3, Indicators of quality of the underpinning research).

Their first study analysed the performance of supervised machine learning models to predict online gambling ‘self-exclusion’ [3.1]. From a sample of 845 online gamblers, four ML models were evaluated empirically. The Random Forest technique was found to be the most effective method for prediction, achieving an accuracy improvement of 35% versus baseline estimates.
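The study design described above can be sketched with scikit-learn. This is an illustrative stand-in, not the published study: the synthetic data, the particular model line-up and all hyperparameters are assumptions; the paper [3.1] evaluated four models on real behavioural data, with Random Forest performing best.

```python
# Illustrative sketch (assumptions throughout): comparing supervised
# classifiers for predicting self-exclusion, in the spirit of [3.1].
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for behavioural features of 845 online gamblers.
X, y = make_classification(n_samples=845, n_features=10, random_state=0)

# Hypothetical model line-up; the paper's exact four models may differ.
models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "decision_tree": DecisionTreeClassifier(random_state=0),
    "neural_network": MLPClassifier(max_iter=1000, random_state=0),
    "random_forest": RandomForestClassifier(random_state=0),
}

for name, model in models.items():
    # 5-fold cross-validated accuracy, as a baseline comparison.
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(f"{name}: {scores.mean():.3f}")
```

On real gambling data the comparison would also weigh class imbalance and costs of false negatives, which plain accuracy hides.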

Further research began by testing industry requirements for potential models. This revealed the criticality of ‘explainable’ models, both to help the user interact with the gambler and to recommend interventions. These insights were crucial: current machine learning methods may offer good prediction performance, but their effectiveness is limited by the machine’s failure to explain its decisions to users [3.2].

In pursuit of explainable models, City’s research team (academics Garcez, Weyde and Slabaugh, with Research Associate Manoel França) tested a variant of the knowledge extraction algorithm TREPAN which, when given a neural network trained on gambling data, was able to produce compact, human-readable logic rules. The results converted a neural network model with more than 500 parameters into a decision tree with only nine routes, at the cost of just a 1% loss of model accuracy. This original research was the first to report an industrial-strength application of knowledge extraction from neural networks in gambling protection from harm [3.2].
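The extraction idea above can be illustrated with a simple surrogate-tree sketch. Note the hedge: the published work used a TREPAN variant, which grows the tree by actively querying the trained network; the plainer "distillation" below (fit a small tree to the network's predictions) is a stand-in for the same goal, with all data and sizes invented.

```python
# Illustrative sketch of knowledge extraction via a surrogate tree.
# Not the TREPAN variant used in [3.2]; a simpler stand-in for the idea.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=845, n_features=10, random_state=0)

# "Black box": a neural network with hundreds of parameters.
net = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
net.fit(X, y)

# Fit a compact tree to the network's *predictions*, not the raw labels,
# capped at nine leaves to echo the nine-route tree reported above.
surrogate = DecisionTreeClassifier(max_leaf_nodes=9, random_state=0)
surrogate.fit(X, net.predict(X))

# Fidelity: how often the tree agrees with the network it explains.
fidelity = (surrogate.predict(X) == net.predict(X)).mean()
print(export_text(surrogate))          # human-readable rules
print(f"fidelity to network: {fidelity:.2f}")
```

High fidelity with few leaves is what makes the extracted rules usable by clinicians and regulators who cannot inspect the network directly.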

Additional research produced the first comparative evaluation of predictive performance and tree properties for trees extracted from neural networks and from Random Forest models – also the first comparative evaluation of knowledge extraction for safer gambling [3.3]. The results confirmed that Random Forests performed better than neural networks on this type of data – with 87% accuracy – and that both methods outperformed a standard decision tree on cross-validated accuracy.

These findings underpin the current BetBuddy system: Random Forests are trained on historical data such as play intensity variation, frequency, deposits and withdrawals – in combination with an existing BetBuddy rule-based system – to predict whether a player should be classified as a “self-excluder”. Analytics are automatically fed back to the operator and player. Operators can then make interventions, such as tailored marketing strategies, while players receive auto-generated personalised communications and a series of choices from the operator’s existing player protection tools. The regulator can also access the explanations of the system for audit purposes.
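The hybrid decision step described above – model score combined with an existing rule-based system, with reasons fed back to operator, player and regulator – might be sketched as follows. Everything here is a hypothetical illustration: the feature names, the rule threshold and the combination logic are invented, not BetBuddy's implementation.

```python
# Hypothetical sketch (not BetBuddy's system): combine a trained
# Random Forest score with a rule-based check and return an
# explanation alongside the flag, as the text describes.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for features like play intensity variation,
# frequency, deposits and withdrawals.
X, y = make_classification(n_samples=500, n_features=4, random_state=1)
rf = RandomForestClassifier(random_state=1).fit(X, y)

def assess_player(features, deposits_per_week):
    """Return (at_risk, reasons): model score plus an invented rule."""
    score = rf.predict_proba([features])[0, 1]
    rule_hit = deposits_per_week > 10      # invented rule threshold
    at_risk = bool(score > 0.5 or rule_hit)
    reasons = []                           # audit-ready explanation
    if score > 0.5:
        reasons.append(f"model risk score {score:.2f}")
    if rule_hit:
        reasons.append("deposit frequency rule triggered")
    return at_risk, reasons

flag, why = assess_player(X[0], deposits_per_week=12)
print(flag, why)
```

The `reasons` list stands in for the explanations the text says operators act on and regulators can audit.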

Several MSc internship projects took place in collaboration with BetBuddy; one intern, Sanjoy Sarkar, an MSc Data Science student at City and co-author of some of the papers, was later hired by BetBuddy. City also secured a PhD studentship, in collaboration with BetBuddy’s CEO Simo Dragicevic, which was 100% funded by industry. The partnership between City and BetBuddy has recently evolved to include a collaboration with Kindred Group, one of the largest online gambling groups, to explore using AI techniques to detect suspicious gambling and money laundering. The research collaboration also continues, with the most recent publication investigating the timely issue of fairness and gender bias in the use of AI for gambling, in collaboration with BetBuddy’s chief data scientist, Percy [3.5].

3. References to the research

3.1 Percy, C., França, M., Dragičević, S., & d’Avila Garcez, A. (2016). Predicting online gambling self-exclusion: an analysis of the performance of supervised machine learning models. International Gambling Studies, 16(2), 193-210. DOI: https://doi.org/10.1080/14459795.2016.1151913

3.2 Percy, C., d’Avila Garcez, A. S., Dragičević, S., França, M. V., Slabaugh, G., & Weyde, T. (2016, August). The need for knowledge extraction: understanding harmful gambling behavior with neural networks. In Proceedings of the Twenty-second European Conference on Artificial Intelligence (pp. 974-981). IOS Press. DOI: https://doi.org/10.3233/978-1-61499-672-9-974. Permanent repository link: http://openaccess.city.ac.uk/id/eprint/16483/

3.3 Sarkar, S., Weyde, T., d’Avila Garcez, A., Slabaugh, G. G., Dragicevic, S., & Percy, C. (2016, December). Accuracy and interpretability trade-offs in machine learning applied to safer gambling. In CEUR Workshop Proceedings (Vol. 1773) at NIPS. CEUR Workshop Proceedings. Accessible at: http://ceur-ws.org/Vol-1773/CoCoNIPS_2016_paper10.pdf Permanent repository link: https://openaccess.city.ac.uk/id/eprint/16484

3.4 Dragičević, S., Percy, C., Kudic, A., & Parke, J. (2013). A Descriptive Analysis of Demographic and Behavioral Data from Internet Gamblers and Those Who Self-exclude from Online Gambling Platforms. Journal of Gambling Studies, 1-28. https://link.springer.com/article/10.1007%2Fs10899-013-9418-1

3.5 Percy, C., d’Avila Garcez, A., Dragicevic, S., Sarkar, S. (2020). Lessons Learned from Problem Gambling Classification: Indirect Discrimination and Algorithmic Fairness. AAAI Fall Symposium, Washington, USA, AAAI Press.

3.6 Garcez, A., Broda, K. and Gabbay, D. M. (2001). Symbolic knowledge extraction from trained neural networks: A sound approach. Artificial Intelligence, 125(1-2), pp. 153-205. https://www.sciencedirect.com/science/article/pii/S0004370200000771?via%3Dihub Permanent repository link: https://openaccess.city.ac.uk/id/eprint/293

Indicators of quality of the underpinning research

Three outputs were published in prestigious academic journals and three were presented at international conferences, all of which apply a rigorous peer-review process prior to acceptance of papers.

The research was supported by two grants:

EPSRC grant EP/M50712X/1 (£104,938) led by City, University of London (Oct 14 - Dec 15) https://gtr.ukri.org/projects?ref=EP%2FM50712X%2F1

UK Research and Innovation grant 101928 (£174,365) led by BetBuddy with participation of City, University of London (Oct 14 - Jan 16) https://gtr.ukri.org/projects?ref=101928

4. Details of the impact

In an effort to reduce gambling harms, all UK gambling providers are legally obliged to offer customers a self-exclusion option as one of several preventative measures. City’s research has directly impacted BetBuddy's ability to provide operators with an effective predictor for self-exclusion and other signals that indicate problem gambling. This enables gambling operators, the main beneficiary of the BetBuddy system, to identify problem gambling early, which in turn currently enables approximately 3 million registered gamblers worldwide [5.1] to use online gambling platforms more securely and responsibly.

Practical contribution to responsible gambling efforts

City’s research directly led to the development of PowerCrunch, BetBuddy’s core data mining and machine learning platform. It is available either as Software as a Service (SaaS) or as a product that can be seamlessly integrated with player account management systems, content management systems, and eCRM systems.

BetBuddy’s CEO, Simo Dragicevic, explains: “Together, we developed a system capable of identifying harmful play and assisting in reducing at-risk gambling” [5.1]. “City has enabled us to build more robust and accurate prediction models and apply new, creative algorithms to gambling data. By applying their expertise in knowledge extraction techniques to 'black box' prediction models, clinicians, regulators, and industry can better understand how these models can predict behaviour and better protect consumers at risk of harm.” [5.2]

Since 2014, 300,000 registered users of the Ontario Lottery and Gaming Corporation (OLG), a holder of a government mandate to work towards a ‘Gold Standard’ in Responsible Gambling (RG), have been monitored using the BetBuddy system. Since partnering with BetBuddy, OLG has twice received the World Lottery Association’s Best Overall Responsible Gambling Program award, most recently in 2018.

In qualitative research based on interviews with 150 OLG players, 9 out of 10 identified the BetBuddy play management tool as helpful in managing their personal gambling behaviours [5.3]. The tool is part of the OLG online gambling platform and interacts with players via targeted messaging and banners. John Wisternoff, OLG’s Vice President of iGaming, said: “It’s very clear that we do not diagnose players, but we can respond to the activity that occurs on our website (…) The BetBuddy engine in the background is analyzing play, and our marketing department has created a series of banners that talk about our tools, player protections and other items.” The website delivers those banners (messages) based on the information provided by the BetBuddy algorithms [5.4].

In 2017, BetBuddy was acquired by Playtech – the world’s largest provider of online gambling and sports betting software – for a total of €2.2 million [5.6]. In a move to affirm its continued commitment to responsible gambling, Playtech integrated BetBuddy into nine platforms, including OLG and BuzzBingo, and its solutions currently reach approximately 3 million registered gamblers [5.1].

Ian Ince, Head of Regulatory Affairs and Compliance at Playtech, explains [5.6]: “BetBuddy is the leading company in this field and has a team that has focused exclusively on developing an industry-leading Responsible Gambling solution…This acquisition demonstrates our commitment to producing solutions and games that will enable Playtech to be the most responsive and responsible businesses in the industry.”

BetBuddy won RegTech supplier of the year at the Gambling Compliance Awards 2018 [5.5], in recognition of its contribution to industry regulation and compliance, and was shortlisted at the Lloyds Bank National Business Awards 2019 for its AI solutions, finishing runner-up to the winner, Darktrace [5.1].

BetBuddy’s technology effectively keeps Playtech at the forefront of the industry when it comes to promoting safer play. In 2020, Playtech’s BetBuddy team was selected to lead the UK Gambling Commission collaboration group working on developing the first Industry Code for Product Design [5.7]. “The use of BetBuddy technology and expertise, resulting from the fundamental research at City, gave BetBuddy and Playtech the trust required to lead this important regulatory industry initiative.” [5.1]

Contributing to parliamentary debate & setting industry standards

In a 2013 parliamentary debate of the Gambling Bill, MP Clive Efford read evidence from Dr Sally Gainsbury, Director of the National Problem Gambling Clinic, in which she cited BetBuddy as an example of technology which can assist in reducing problem gambling [5.8].

The Responsible Gambling Strategy Board has also cited BetBuddy in their progress report as an example of a product used to identify harmful play [5.9].

The Gambling Commission’s Licensing Conditions and Codes of Practice require operators to identify “at risk customers”. Operators are also now choosing to reveal responsible gambling Key Performance Indicators in their annual reports. In 2018 the former Sports Minister Tracey Crouch said the Government welcomed steps taken by some operators to incorporate behavioural analytics into their responsible gambling systems, while in the Gambling Commission’s Raising Standards keynote speech, Sarah Harrison, the UK's former regulator, commended the strategic importance of Playtech's acquisition of BetBuddy in meeting these obligations [5.10].

In a 2018 European Commission report evaluating existing tools for enforcing online gambling rules and channelling demand towards controlled offers, Playtech’s acquisition of BetBuddy serves as an example of regulators and industry being proactive and investing in technologies that enable the identification of players at risk [5.11].

Raising awareness and helping industry comply with standards

City's research has attracted widespread media attention and members of the team regularly contribute to debate on the role of technology in RG.

In 2016, City and BetBuddy delivered a Responsible Gambling Algorithms Roundtable with 15 participants drawn from industry, treatment providers and regulators. The importance of explainable AI models for RG, such as the BetBuddy system, was widely acknowledged: “Interpretability can be an advantage when providing treatment as the counsellor has specific and relevant behavioural indicators to discuss,” said Dirk Hansen, CEO of GamCare.

BetBuddy’s CEO Dragicevic is also a board member of the Responsible Gambling Council, an independent non-profit organization that has led on the prevention of problem gambling around the world for over 35 years. Dragicevic has been invited to lead and contribute to several international events on raising standards in compliance with current regulations.

Wider impact

City and BetBuddy have recently partnered with Kindred Plc to explore the use of AI to strengthen anti-money laundering decision-processes. They have produced an industry whitepaper of recommendations, as well as presenting early analytical research at the 2018 European Association for the Study of Gambling (EASG) conference. [5.12]

5. Sources to corroborate the impact

5.1 Playtech/BetBuddy testimonial from BetBuddy’s CEO

5.2 Engineering and Physical Sciences Research Council (EPSRC). 2015. Online gambling to get safer through better prediction of addiction. Public release: https://www.eurekalert.org/pub_releases/2015-10/eaps-ogt102315.php Accessed 08.12.2020.

5.3 BetBuddy, Case Study: Ontario Lottery and Gaming Corporation and Using Big Data (pdf) and Bigdata article Using Big Data Analytics to Fight Gambling Addiction (2016)

https://bigdataanalyticsnews.com/using-big-data-analytics-to-fight-gambling-addiction/ Accessed 08.12.2020.

5.4 John Wisternoff, OLG’s Vice President of iGaming, in Patricia McQueen, PlayOLG Launches in Ontario. Responsible gambling and player protections are at the core of OLG’s venture into

online gaming. Insights 2015 https://nasplmatrix.org/insights-files/pdf/2015MarchApril.pdf Accessed 08.12.2020.

5.5 Playtech Annual Report 2018, available online at: http://www.annualreports.com/HostedData/AnnualReports/PDF/LSE_PTEC_2018.pdf Accessed 08.12.2020.

5.6 Ian Ince in Playtech acquires BetBuddy, 2017, available at: https://www.playtech.com/news/playtech-acquires-betbuddy Accessed 08.12.2020.

5.7 Daniel O'Boyle, GC chief McArthur defends track record, hails industry progress, 2020 https://www.igamingbusiness.com/news/gc-chief-mcarthur-defends-track-record-hails-industry-progress Accessed 08.12.2020.

5.8 Gambling (Licensing and Advertising) Bill, Public Bill Committee, Tuesday 19 November 2013 https://publications.parliament.uk/pa/cm201314/cmpublic/gambling/131119/pm/131119s01.htm Accessed 08.12.2020.

5.9 RGSB report https://www.rgsb.org.uk/PDF/RGSB-Progress-Report-2017-18.pdf Accessed 08.12.2020.

5.10 Sarah Harrison, 2017. Raising Standards keynote speech https://www.gamblingcommission.gov.uk/PDF/speeches/Raising-standards-keynote-speech-Sarah-Harrison-2017.pdf Accessed 08.12.2020.

5.11 European Commission report, 2018, Evaluation of Regulatory Tools for Enforcing Online Gambling Rules and Channelling Demand towards Controlled Offers https://www.azarplus.com/wp-content/uploads/2019/02/Estudio-CE.pdf Accessed 08.12.2020.

5.12 Industry Stakeholder Interview Whitepaper, Raising Standards in Compliance: Application of artificial intelligence to online gambling data to identify anomalous behaviours, 2018 https://www.playtech.com/playtech-protect/media/3114/whitepaper-aml-and-ai-2jul2018.pdf Accessed 08.12.2020.

All sources are also available as pdf files.

Submitting institution
City, University of London
Unit of assessment
11 - Computer Science and Informatics
Summary impact type
Societal
Is this case study continued from a case study submitted in 2014?
No

1. Summary of the impact

Research in Human-Computer Interaction (HCI) conducted by Dr Simone Stumpf and Dr Stephann Makri in collaboration with Oregon State University (USA) resulted in the development of GenderMag, a systematic method for finding and fixing gender barriers in technology, with significant impact on existing technology, making it more inclusive and usable for everyone. Using the GenderMag (GM) method, industry practitioners, such as Microsoft, can identify and address the root causes of gender barriers and achieve gender-inclusive technology, benefiting organisations and end-users, for example (text removed for publication) website users. Through partnerships with organisations worldwide, public events and educational outreach, GenderMag has contributed to shaping industry standards, for example by contributing to the “Microsoft Inclusive Design Toolkit”, a resource that sets standards for designers and developers of digital products, and to changing the mindset about gender inclusion in technology. Twenty-three HE institutions worldwide have incorporated GenderMag into undergraduate and graduate programmes.

2. Underpinning research

There are vast gender imbalances in a range of different circumstances and domains, which affect education, business and participation in society [3.5]. Gender differences have also been found in technology design, and it has been shown that much of this technology is not designed to support diversity (Borkin et al., 2013; Fernandez et al., 2013; Hassell, 2015; Tan et al., 2003), even when it is claimed that technology is gender-neutral, i.e. ‘no gender is intentionally assigned to their [product’s] users’ (Williams, 2014). Furthermore, research by Burnett et al. (2005) demonstrated that commercial software tools are optimized based on male developers’ preferences, whilst software development teams remain predominantly male (80%) (National Centre for Women in Information Technology (NCWIT), 2020, https://www.ncwit.org/resources/ncwit-scorecard-status-women-computing-2020-update), leading to gender bias in product development (Williams, 2014). All this results in technology that is not inclusive and is less usable by everyone. In addition, the effects of such gender-biased technology can impede business success (Williams, 2014).

Footnote: Software development team composition has not changed for the past ten years, since 2010. See National Center for Women in Information Technology, NCWIT Scorecard: A Report on the Status of Women in Information Technology, 2010; http://www.ncwit.org/sites/default/files/resources/scorecard2010_printversion_web.pdf

In 2015, City academics Dr Simone Stumpf and Dr Stephann Makri began to collaborate with Professor Margaret Burnett and her colleagues from Oregon State University (USA), researchers from Clemson University (USA) and MathWorks (USA), and a researcher from Denmark to develop the Gender Inclusiveness Magnifier (GenderMag) method. Prior to this research, there had been no work considering how to support designers and developers to systematically find gender barriers in their technology products and services, to then address them and make improvements, and to include all users irrespective of gender. The research team developed a new HCI method comprising a set of personas based on multi-disciplinary research findings, coupled with a task walkthrough technique, which they validated through a set of empirical studies [3.1]. Personas have long been used in HCI design, and so are familiar to designers and developers. They encapsulate descriptions and traits of archetypal users and thus ensure that these user groups are put at the centre of designing the technology. The researchers developed a set of 3 personas (Abi, Pat and Tim) which embed 5 facets relevant to technology use which have been shown to vary by gender: information processing style, learning style for new technology, computer self-efficacy, attitude to risk, and motivations. The Abi and Tim personas represent extreme opposites on the facet dimensions; for example, Abi has an extreme comprehensive information processing style while Tim has a selective information processing style, and Abi is risk-averse while Tim has a high tolerance for risk. The Pat persona represents those people with facet dimensions in between those extremes. These personas are instrumental in bringing validated research to technology designers and developers, rather than relying on their stereotypes, to ensure that the technology is inclusive of all 3 personas.

A cognitive walkthrough is a well-established usability inspection technique; the research team adapted and extended it to deeply embed the personas. To conduct a GenderMag walkthrough, the researchers implemented a series of forms which designers and developers can use to step through a task, answering a series of questions which point to potential gender-inclusion issues. This new process method is available for download in a toolkit format, with printable material including instructions for use and customisation, printable personas, and printable walkthrough forms [3.2].
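The persona-plus-walkthrough structure described above can be sketched as a minimal data model. This is a hypothetical illustration: the facet values paraphrase the text (Abi comprehensive and risk-averse, Tim selective and risk-tolerant), while the learning-style labels and the question wording are invented stand-ins, not the official GenderMag forms available from gendermag.org.

```python
# Hypothetical sketch of the GenderMag building blocks: personas
# defined by five facets, used in a step-by-step task walkthrough.
from dataclasses import dataclass

@dataclass
class Persona:
    name: str
    information_processing: str   # comprehensive vs selective
    learning_style: str           # invented labels, for illustration
    computer_self_efficacy: str
    attitude_to_risk: str
    motivation: str

# Facet values paraphrase the text; Pat would sit between these extremes.
abi = Persona("Abi", "comprehensive", "process-oriented", "low",
              "risk-averse", "uses technology to accomplish tasks")
tim = Persona("Tim", "selective", "tinkering", "high",
              "risk-tolerant", "enjoys exploring new technology")

def walkthrough_step(persona, action):
    """One walkthrough step: questions evaluators answer per persona.
    Question wording is invented, not the official GenderMag forms."""
    return [
        f"Will {persona.name} know what to do at step '{action}'?",
        f"If {persona.name} does it, will they know it worked?",
    ]

for question in walkthrough_step(abi, "find the export button"):
    print(question)
```

Stepping every persona through every action, rather than only the designer's own mental model, is what surfaces the gender-inclusion issues the text describes.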

The research team also validated this new method through a series of empirical studies, which showed that this method can be easily employed by designers and developers, with no background in gender research, to identify potential gender-inclusion issues, and that those issues that they found are in fact real issues that affect technology.

In conducting their work, Drs Stumpf and Makri also realised that there is scant work on teaching designers and developers about gender inclusion. They have therefore developed teaching materials in support of GenderMag [3.3], which can be used by others to educate future HCI, UX and software professionals about gender issues in software design and how to find them. In these teaching materials, they present gender as a socially constructed notion, discuss the harmful effects that gender stereotypes can have, and show how gender impacts technology design and use. They motivate why gender needs to be considered in technology design and discuss a number of approaches to designing more inclusively. The researchers have provided worked examples for applying GenderMag in a classroom setting.

Their work has since extended beyond gender to consider other user groups, such as people with cognitive or visual impairments, or low literacy or socio-economic status. The research team have devised InclusiveMag – an Inclusiveness Magnifier – built inductively by generalising the principles and processes used in creating GenderMag [3.4]. They have shown how the GenderMag method can be systematically extended to produce new sets of personas, and how to embed these in expert evaluations.

3. References to the research

3.1 Margaret Burnett, Simone Stumpf, Jamie Macbeth, Stephann Makri, Laura Beckwith, Irwin Kwan, Anicia Peters, William Jernigan. GenderMag: A Method for Evaluating Software's Gender Inclusiveness. 2016. Interacting with Computers, 28 (6). [Google Scholar citations: 92. IwC 911 PDF Downloads]. DOI: https://doi.org/10.1093/iwc/iwv046

3.2 Margaret Burnett, Simone Stumpf, Laura Beckwith, Anicia Peters. The GenderMag Kit: How to Use the GenderMag Method to Find Inclusiveness Issues through a Gender Lens. 2020. Latest version publicly available from http://gendermag.org Accessed 20.12.20.

3.3 Simone Stumpf. Gender Issues in Inclusive Design. 2017. Lecture notes publicly available from https://sites.google.com/site/gendermagteach/ Accessed 20.12.20.

3.4 Mendez, C., Letaw, L., Burnett, M., Stumpf, S., Sarma, A. and Hilderbrand, C. From GenderMag to InclusiveMag: An Inclusive Design Meta-Method. 2019 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC), pp. 97-106. DOI: 10.1109/vlhcc.2019.8818889

3.5 Simone Stumpf, Anicia Peters, Shaowen Bardzell, Margaret Burnett, Daniela Busse, Jessica Cauchard, and Elizabeth Churchill. Gender-Inclusive HCI Research and Design: A Conceptual Review. Foundations and Trends® in Human-Computer Interaction 13, 1 (2020), 1–69. DOI: https://doi.org/10.1561/1100000056

4. Details of the impact

GenderMag allows ordinary practitioners, with no background in gender research, to identify which aspects of their software have gender-inclusiveness issues, in order to resolve them. The GenderMag methods are research-based, empirically shown to work, and freely available to the public. Their implementation has led to improved products and processes in commercial and public organisations. The GenderMag method helps to shape industry standards and has been shaping attitudes and raising awareness of gender-inclusion issues via a series of training workshops and the inclusion of GenderMag in 23 HE programmes worldwide.

Improved products and processes in commercial and public organisations

Since 2015, the GenderMag (GM) methods have been implemented in the design and development processes of Microsoft (US) [5.2], [5.3], (text removed for publication) (UK) [5.4], Greenstone Digital Library (NZ) [5.6], HazAdapt (US) [5.5], and at least 4 other organisations [5.1] to improve the gender-inclusiveness of products and services used by millions of end-users worldwide.

Twelve teams at Microsoft have implemented the GM methods and used GM to successfully improve the inclusiveness of Microsoft software [5.2]. Issues identified using GM mirrored findings involving real users. Kat Holmes, a former Principal Director of Inclusive Design at Microsoft, emphasises the unique value of the method [5.3]: “A product leader was concerned that there were far fewer women using their product than they expected. [GenderMag] helped the Microsoft research team reframe the problem and ensure that the product did not favor a particular learning style. They restructured their research to recruit people by learning style and interviewed people from multiple genders, including transgender participants.”

(text removed for publication)

HazAdapt (Oregon, US), whose customers include Oregon State University Emergency Management and local emergency authorities in Oregon (US), integrated GM into the development of an emergency platform that complements calling the emergency line number to communicate with authorities. Ginny Katz, CEO, explains that “in the initial design process of our HazAdapt app, we placed a great deal of focus on the first few minutes and hours of an emergency (…) GenderMag was an essential tool for us to get this critical user story right” [5.5].

Greenstone Digital Library (GDL) is a suite of open-source, multi-lingual software for building and distributing digital library collections, produced by the University of Waikato and developed and distributed in cooperation with UNESCO. GDL is currently used in over 70 countries worldwide and downloaded around 4,500 times a month. An evaluation of the GDL interface using GenderMag uncovered three major gender barriers that could hinder GDL uptake [5.6].

As of 20 December 2020, the GM toolkit has been downloaded 1185 times by 353 unique organisations [5.7].

Shaping industry standards

GenderMag research has made a significant contribution to the development of the “Microsoft Inclusive Design Toolkit”, a resource that sets standards for designers and developers of digital products [5.8]. Further, the GM method has been included in recommendations for “tools for making human-computer interaction research gender aware and gender inclusive” in a report from the EU Gender Equality in Engineering through Communication and Commitment (GEECCO) project [5.9].

In addition, GenderMag can be used to gain the iGIANT Seal of Approval (https://www.igiant.org/sea) to demonstrate that a company has integrated gender inclusivity into its operations. Virginia Katz, CEO at HazAdapt, explains: “In September 2019, GenderMag also helped us to get awarded the iGIANT Seal of Approval. (…) Our goal is to continually lead in the field of emergency communication technology as the first and highest rated inclusive option as we are now. Being certified inclusive is an attractive aspect that helps public safety and emergency management entities showcase their initiatives to become more inclusive and engaging with their diverse public.” [5.5]

Shaping attitudes and raising awareness of gender inclusion issues

Through a series of workshops, Dr Stumpf and Professor Burnett have reached out to software professionals worldwide. A GenderMag webinar delivered by Dr Stumpf in May 2020 was attended by professionals from 20 UK-based companies and 2 companies in the US. The post-event feedback showed that attendees’ awareness of gender inclusion issues has improved. This session led to establishing a partnership with the (text removed for publication) [5.4].

Incorporation of GM into higher education teaching programmes has a wider impact on raising awareness among future STEM professionals. GenderMag has been taught in 23 HE programmes worldwide, including Cornell University, USA; Harvard University, USA; and the University of Edinburgh, UK (https://sites.google.com/site/gendermagteach/home/where-is-gendermag-taught). GM has become a desirable specialism for user researchers: competence in GM was included in the person specification for a Senior User Researcher post at Bloomberg, New York, US [5.10].

5. Sources to corroborate the impact

5.1 Margaret Burnett, Anicia Peters, Charles Hill, and Noha Elarief. 2016. Finding Gender-Inclusiveness Software Issues with GenderMag: A Field Investigation. Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems. Association for Computing Machinery, New York, NY, USA, 2586–2598. DOI: https://doi.org/10.1145/2858036.2858274

5.2 Mihaela Vorvoreanu, Lingyi Zhang, Yun-Han Huang, Claudia Hilderbrand, Zoe Steine-Hanson, Margaret Burnett. 2019. From Gender Biases to Gender-Inclusive Design: An Empirical Investigation, In ACM Conference on Human Factors in Computing Systems Proceedings (CHI 2019), May 4-9, 2019, Glasgow, Scotland, UK. ACM, New York, NY, USA.

5.3 Kat Holmes (2020) Mismatch: How Inclusion Shapes Design (Simplicity: Design, Technology, Business, Life). The MIT Press.

5.4 (text removed for publication)

5.5 Letter of Support, CEO at HazAdapt.

5.6 Sally Jo Cunningham, Annika Hinze, and David M. Nichols. 2016. Supporting Gender-Neutral Digital Library Creation: A Case Study Using the GenderMag Toolkit. In Digital Libraries: Knowledge, Information, and Data in an Open Access Society (Lecture Notes in Computer Science), 45–50.

5.7 GenderMag google form stats.

5.8 Microsoft Inclusive Design Toolkit (2018), available online at https://www.microsoft.com/design/assets/inclusive/InclusiveDesign_DesigningForGuidance.pdf Accessed 20.12.20.

5.9 Sabrina Burtscher (2019) Literature Review: Gender Research in Human Computer Interaction, a report from the European Union project GEECCO: Gender Equality in Engineering through Communication and Commitment, available at http://www.geecco-project.eu/fileadmin/t/geecco/Literatur/neu/literature_review_KORR_07012020.pdf Accessed 20.12.20.

5.10 Bloomberg Job Advertising, Listing knowledge of GenderMag as part of the requirements.

Submitting institution
City, University of London
Unit of assessment
11 - Computer Science and Informatics
Summary impact type
Technological
Is this case study continued from a case study submitted in 2014?
Yes

1. Summary of the impact

A method for demonstrating that the risk in a computer-based system is acceptably low, structured through “claim-argument-evidence” ("CAE") links and supported by quantitative models, is widely and increasingly adopted by industry and regulators in the UK and worldwide.

This method has originated from research conducted in the Centre for Software Reliability (CSR). It requires explicit arguments linking evidence to the claims made about, e.g., safety and security; it encourages rigour and the use of analytical probabilistic models.

The impact through use in industry, listed in the REF2014 case, has continued and has increased with new adopters and adoption of an extension to the method. New beneficiaries include companies and regulators from railway, energy and autonomous vehicles.

The research has informed safety policies for complex critical infrastructures and contributed to new standard and guidance documents worldwide.

2. Underpinning research

The underpinning research spans several decades. “Assurance cases” extend the approach of safety cases (well-structured sets of documents) to demonstrate that the risks posed by critical systems are acceptably low. Assurance cases have been widely adopted by industry and regulators, in the UK and worldwide.

Research conducted in CSR since 2014 has continued the research directions that led to the impact reported in REF 2014, with several significant extensions of the method:

  • support for users of assurance cases to better structure them and to make them more trustworthy by incorporating probabilistic models. The advances concern:

  • improving rigour by defining semantics for fragments of CAE arguments ("CAE Blocks"); improving usability of the CAE framework by introducing various guidance elements, e.g., the “helping hand” visual aid [3.6];

  • formulating explicitly the need for validation of system dependability in the presence of uncertainty [3.1] and demonstrating the benefits that a probabilistic model-based assessment can add to assurance cases. The research team reduced the difficulty of applying model-based assurance to very complex systems, such as interdependent critical infrastructures. Incremental refinement [3.3] allows assessors to progress from a very abstract model of the complex system to a high-fidelity model, as required by stakeholders.

  • extensions to security and to security-safety “co-engineering”. In case studies in industrial automation, power grids and medical devices (in projects SeSaMo, AQUAS, RITICS-CEDRICS, I3S), the researchers developed models of the analysed systems which capture the essential aspects of the assessment recorded in an assurance case. For instance, in assessing safety under cyber-attacks, it is essential to model credibly how successful attacks on the computer systems can degrade the safety of the controlled engineered system [3.5].

  • more recently the research team addressed the assurance gaps in critical applications of machine learning and artificial intelligence, with case studies from autonomous vehicles (studied in the recently completed TIGARS and the ongoing ICRI-SAVe projects);

  • The team also provided more supportive tools for probabilistic modelling, especially of large, complex systems. Their PIA-FARA tool [3.3] supports "what if" analyses of accident/intrusion propagation scenarios in complex infrastructure, and integration of analysis results into the CAE framework (e.g., in RITICS-CEDRICS project);

  • The research team demonstrated potential pitfalls in extending the use of "fault injection" (a well-established technique for probing resilience mechanisms, e.g., in IEC 61508 and ISO 26262), as some practitioners do, to quantifying a system's resilience against design faults. A widely recognised problem is making injected faults "realistic"; the team's modelling [3.4] demonstrated more serious issues. The letter of support from Intel Labs [5.2] identifies the practical impact of this insight for their work on building dependable autonomous vehicles.

  • Further work on probabilistic aspects of confidence (all projects above). An important, ongoing line of research concerns “conservative” Bayesian assessment, e.g. [3.2]. Bayesian methods bring advantages, recognised by some regulators, but their complexity invites shortcuts that undermine rigour and hence safety. The approach by the CSR team simplifies rigorous application while guaranteeing against over-optimism. This approach is now being applied to autonomous vehicles, to help translate experience of safe operation into the level of confidence that it supports in future safety.
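As a rough illustration of how experience of safe operation translates into confidence in future safety, the sketch below uses a textbook Bayesian calculation with a uniform prior on the probability of failure per demand (pfd). It is deliberately not the CSR team's conservative Bayesian method, which is designed precisely to avoid the over-optimism that such convenient priors can introduce:

```python
# Textbook Bayesian illustration (NOT the CSR "conservative Bayes" approach):
# with a uniform Beta(1, 1) prior on the pfd, observing n failure-free demands
# yields a Beta(1, n + 1) posterior, so
#   P(pfd <= p | n failure-free demands) = 1 - (1 - p)**(n + 1).

def confidence_after_failure_free(n: int, p: float) -> float:
    """Posterior probability that pfd <= p after n failure-free demands,
    assuming a uniform prior on the pfd."""
    return 1.0 - (1.0 - p) ** (n + 1)

if __name__ == "__main__":
    # How many failure-free demands support ~99% confidence that pfd <= 1e-3?
    n = 4603  # roughly ln(0.01) / ln(1 - 1e-3)
    print(round(confidence_after_failure_free(n, 1e-3), 3))  # about 0.99
```

A conservative analysis in the sense of [3.2] would instead bound the posterior confidence over a whole family of priors constrained only by what the assessor genuinely believes, rather than committing to one convenient prior as above.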

3. References to the research

The research outputs on which this impact case is based have been published in selective peer-reviewed forums: high-impact-factor technical journals [3.1-3.3] and the proceedings of the prestigious International Symposium on Software Reliability Engineering [3.4, 3.5], one of the few conferences using both double-blind reviews and review moderation by senior members of the PC. Article [3.6] is published in IEEE Software, which reaches a very wide audience of practitioners.

A more complete list of publications related to the impact case can be seen at: https://researchcentres.city.ac.uk/software-reliability/research/REF2021/_nocache.

3.1 Bishop P.G., et al., Towards a Formalism for Conservative Claims about the Dependability of Software-Based Systems. IEEE Transactions on Software Engineering, 2011. 37(5): p.708-717.

3.2 B. Littlewood and A. A. Povyakalo, Conservative reasoning about epistemic uncertainty for the probability of failure on demand of a 1-out-of-2 software-based system in which one channel is “possibly perfect”. IEEE Transactions on Software Engineering, 2013. 39(11): p.1521-1530.

3.3 R. E. Bloomfield, et al., Preliminary interdependency analysis: An approach to support critical-infrastructure risk-assessment. Reliability Engineering & System Safety, 2017. 167: p.198-217.

3.4 P. Popov and L. Strigini, Assessing Asymmetric Fault-Tolerant Software, in IEEE 21st International Symposium on Software Reliability Engineering. 2010, IEEE: San Jose, CA, USA, p.41-50.

3.5 P. Popov, Models of reliability of fault-tolerant software under cyber-attacks in The 28th IEEE International Symposium on Software Reliability Engineering (ISSRE'2017). 2017, IEEE: Toulouse, France. p.228-239.

3.6 R. Bloomfield and K. Netkachova, Building Blocks for Assurance Cases, in IEEE International Symposium on Software Reliability Engineering 2014, IEEE: Naples, Italy. p.186-191.

(also: K. Netkachova and R. E. Bloomfield, Security-Informed Safety. IEEE Computer, 2016. 49 (6): p.98-102.)

4. Details of the impact

Failure of critical computer systems could result in death, injury, financial loss and damage to the environment. “Assurance cases”, well-structured sets of documents demonstrating that the risk posed by a critical system is acceptably low, have been widely adopted by industry and regulators, in the UK and worldwide. Assurance cases require explicit arguments linking evidence to the claims made (or goals pursued) about, e.g., safety and security. The approach to assurance via assurance cases was developed over many years with essential contributions from City staff, Prof. Robin Bloomfield and Prof. Peter Bishop, both part-time professors at City, University of London and leading personnel at Adelard LLP (Robin Bloomfield is a founding partner at Adelard LLP; Peter Bishop is the Chief Scientist at Adelard LLP).

This impact case is about a specific form of assurance cases with the following distinct characteristics developed at City, University of London:

  • Assurance cases are built using CAE, recently extended with CAE blocks [5.4];

  • Assurance cases rely not only on informal reasoning, e.g., based on expert judgement, but also on the rigour of models suitable for quantitative risk assessment.

The CAE blocks make the construction of assurance cases easier for practitioners, leading to a wider adoption, and the assurance cases themselves become more expressive and clearer.
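The claim-argument-evidence structure can be pictured as a simple tree in which a top-level claim is supported, via arguments, by sub-claims and ultimately by evidence. The following sketch is purely illustrative: the class names and fields are invented for this example and do not reflect Adelard's ASCE tool or the formal CAE Blocks notation:

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical, minimal model of a claim-argument-evidence (CAE) tree.
# Names and structure are illustrative only, not any real CAE tool's schema.

@dataclass
class Evidence:
    description: str

@dataclass
class Claim:
    statement: str
    argument: str = ""  # why the sub-claims/evidence support this claim
    subclaims: List["Claim"] = field(default_factory=list)
    evidence: List[Evidence] = field(default_factory=list)

    def is_supported(self) -> bool:
        """A claim counts as supported if it rests directly on evidence,
        or decomposes into sub-claims that are all themselves supported."""
        if self.evidence:
            return True
        return bool(self.subclaims) and all(c.is_supported() for c in self.subclaims)

top = Claim(
    "System risk is acceptably low",
    argument="decomposition by hazard",
    subclaims=[
        Claim("Hazard H1 is mitigated", evidence=[Evidence("test report T1")]),
        Claim("Hazard H2 is mitigated", evidence=[Evidence("probabilistic model M1")]),
    ],
)
print(top.is_supported())  # True
```

The value of the CAE Blocks work is precisely that such decompositions are given defined semantics, so that an argument like the one above can be checked for rigour rather than accepted informally.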

Quantitative models help an assessor to decide on the claims or serve as evidence supporting or refuting the claim, especially in those cases where direct empirical evidence is difficult to obtain. Some claims may be ruled out based on results obtained with models. An example of such a claim is “failures of the versions in multi-version software are independent”, which City academics’ probabilistic models have demonstrated not to be credible.
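The failure-independence claim mentioned above can be probed with a small simulation. The sketch below is illustrative, in the spirit of the well-known Eckhardt-Lee and Littlewood-Miller models rather than the City academics' specific models: when the per-input failure probability varies across inputs ("hard" vs "easy" inputs), two independently developed versions fail together far more often than independence would predict:

```python
import random

# Illustrative Monte Carlo sketch (not the City team's models): two versions,
# developed independently, whose failure probability depends on input difficulty.
# Conditional independence given the input does NOT imply unconditional
# independence: P(both fail) ends up well above P(fail)**2.

random.seed(0)

def simulate(trials: int = 200_000):
    both = single = 0
    for _ in range(trials):
        # 10% of inputs are "hard" (each version fails with prob 0.5),
        # the rest are "easy" (failure prob 0.01) -- invented numbers.
        theta = 0.5 if random.random() < 0.1 else 0.01
        f1 = random.random() < theta
        f2 = random.random() < theta  # independent given the input
        single += f1
        both += f1 and f2
    return single / trials, both / trials

p_single, p_both = simulate()
print(p_both, p_single ** 2)  # P(both fail) far exceeds the independence prediction
```

Analytically, with these invented numbers P(fail) = 0.1×0.5 + 0.9×0.01 = 0.059 while P(both fail) = 0.1×0.25 + 0.9×0.0001 ≈ 0.025, roughly seven times the 0.0035 that independence would predict.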

The quantitative models the research team has used range in complexity from simplified probabilistic models, suitable as risk-communication tools, to high-fidelity models (typically hybrids of probabilistic and deterministic models) of complex cyber-physical systems such as critical infrastructures. The simplified models make stakeholder engagement easier by hiding overwhelming system complexity. High-fidelity models, instead, enhance the ability of experts to make well-informed decisions in cases where system complexity makes direct expert judgement difficult.

Impact includes:

  • Reduced risk of harm from malfunctioning or intentional subversion of critical systems (e.g., nuclear, transportation, power supply, defence, medical) through application of a well-structured evidence-based argument aided by models.

  • Improved confidence in assurance: rather than depending on expert judgement or informal reasoning, with attendant significant uncertainty, by using modelling we can narrow this uncertainty or articulate its causes and implications.

  • Better understanding of future widely deployed systems, e.g., connected and autonomous systems. In this era of ubiquitous machine learning and artificial intelligence, the approach typically yields significant savings, both in research and development and in setting directions for the development of such systems. This advantage comes from rigour in modelling new technical systems, and thus the ability to articulate clearly any doubts about unsubstantiated claims (which may arise from newcomers' lack of awareness when dealing with assurance).

4.1. Impact via Adelard LLP.

Major impact is achieved through the long-term collaboration with Adelard LLP.

  • The CAE Blocks framework is a way of structuring arguments:

  • It is a core part of the “IAEA Software Dependability Assessment guideline” [5.3], released in 2018, affecting nuclear safety worldwide.

  • The UK Centre for the Protection of National Infrastructure (CPNI) is expected to publish later in 2021 examples of security-informed safety cases based on CAE Blocks.

  • The CAE approach is supported by Adelard’s commercial tool ASCE. According to the vendor, over 300 organisations are using ASCE worldwide, at least 50% of them use the CAE approach.

  • CAE has been used in Adelard LLP on their projects on assessing security informed safety of industrial systems and in the development of codes of practice for security informed safety:

  • For the rail industry and for connected autonomous vehicles [5.4].

  • In developing a regulatory cyber-maturity model for air traffic management for the Civil Aviation Authority (CAA).

  • In research conducted by Adelard funded by the Assuring Autonomy International Programme (AAIP), a partnership between the Lloyd’s Register Foundation and the University of York, and by the UK Department for Transport (DfT), on projects on autonomous systems and in the TIGARS project. This has led to “Safety Case Templates for Autonomous Systems” [5.6] and a new approach to assurance dubbed the “Assurance 2.0 Manifesto” [5.7]. Assurance 2.0 is the basis for a project within the DARPA ARCOS program on automated certification ( https://www.darpa.mil/program/automated-rapid-certification-of-software).

  • City's stochastic modelling approach and tool, PIA-FARA, supporting the application of CAE to complex systems, such as critical infrastructures, has informed the work of Adelard with the National Cyber-Security Centre (NCSC) on software tools.

  • From 2019 Adelard have been training a multi-disciplinary team of managers and chemical-process Control and Instrumentation (C&I) engineers from a major hazards site in the use of CAE Blocks and elements of Assurance 2.0, with over 100 completing the course to date. Adelard have a long-term project to follow up and provide support.

4.2. Other impact

Impact was also achieved via other partnerships, e.g.:

  • Our case is endorsed by Radiy, a major supplier of C&I for the nuclear industry, with more than 70 installations worldwide including safety protection systems for nuclear plants. The letter of support, by senior executives of the company, acknowledges the impact of City's work, especially the modelling work, on Radiy’s operation, including on the strategic decision to adopt design diversity in their portfolio of FPGA-based products [5.1].

  • The approach to rigorous assurance supported by quantitative models has been adopted by new actors (companies/regulators) in the railway, energy and autonomous-vehicle sectors. The City team was invited to join the Intel Collaborative Research Institute on Safety Assurance of Autonomous Cars (ICRI-SAVe), a recognition of the impact of the prior research conducted by the City team. A statement from the Co-Director of ICRI-SAVe, from Intel Labs, Germany, acknowledges the impact of our work on assurance cases enhanced with rigorous modelling on the current global effort on safety assurance of autonomous vehicles [5.2].

  • Impact has been achieved in informing policies, e.g., on how models can be used to increase confidence in assuring the resilience of complex interconnected critical infrastructures. The work we published [3.3] has, to some extent, informed the recent report by the Royal Academy of Engineering [5.5], especially the part related to interdependencies. Bloomfield was a reviewer of [5.5].

  • Finally, impact has been achieved via inclusion of outputs from our research in new standard/guidance documents, both international and national:

  • Dependability Assessment of Software for Safety Instrumentation and Control Systems at Nuclear Power Plant, IAEA Nuclear Energy Series [5.3]. This document impacts the nuclear industry worldwide setting guidelines for risk limitation based on the CAE blocks;

  • PAS 11281: 2018 ("Connected automotive ecosystems – Impact of security on safety – Code of practice", sponsored by CPNI, 2018) [5.4]. This guidance document impacts the UK industry working on connected automotive systems.

  • Code of Practice: Cyber Security and Safety, IET No. 211014, sponsored by the National Cyber Security Centre (NCSC), [5.8]. This cross-sector code of practice primarily targets the UK industry, but the actual impact may be broader.

5. Sources to corroborate the impact

5.1 A letter of support from Director of RPC Radiy, & Vyacheslav Kharchenko, Head of Centre for Safety Infrastructure Oriented Research & Analysis at Radiy, Research & Production Corporation, Ukraine.

5.2 A letter of support from the Head of the Dependability Research Lab at Intel Labs, Germany, and Co-Director of the Intel Collaborative Research Institute (ICRI-SAVe).

5.3 Dependability Assessment of Software for Safety Instrumentation and Control Systems at Nuclear Power Plants, IAEA Nuclear Energy Series, No. NP-T-3.27, available at:

https://www.iaea.org/publications/12232/dependability-assessment-of-software-for-safety-instrumentation-and-control-systems-at-nuclear-power-plants Accessed 14.12.2020.

5.4 PAS 11281: 2018 ("Connected automotive ecosystems – Impact of security on safety – Code of practice", sponsored by CPNI, 2018), ISBN 978 0 539 02394 7, 60 p., BSI 2018.

5.5 Royal Academy of Engineering, Cyber safety and resilience: strengthening the systems that support the modern economy, N. Jennings, Editor. 2018, Royal Academy of Engineering. p. 52.

5.6 Safety Case Template for Autonomous Systems, available at: http://arxiv.org/abs/2102.02625. Released early in 2021, but developed and shared with various stakeholders in 2020, hence included here. Accessed 04.12.2020.

5.7 Assurance 2.0 Manifesto, available at: https://arxiv.org/abs/2004.10474 Accessed 14.12.2020.

5.8 Code of Practice: Cyber Security and Safety, IET No. 211014, sponsored by the National Cyber Security Centre (NCSC), 93 p., available at:

https://electrical.theiet.org/media/2516/cop_cyber-security-and-safety_linkable_secure.pdf. Accessed 14.12.2020.

Submitting institution
City, University of London
Unit of assessment
11 - Computer Science and Informatics
Summary impact type
Technological
Is this case study continued from a case study submitted in 2014?
Yes

1. Summary of the impact

This case study provides an account of work on the application of Bayesian networks to problems of interest in forensic genetics. Developed in research at City, University of London, an Expert System-based software programme has dramatically improved the speed, efficiency and cost-effectiveness of forensic laboratory methods used in familial DNA analysis and in criminal investigation. The software has been in continuous use by the UK's leading forensic supplier, Eurofins plc, to analyse DNA evidence for forensic identification: determining suspects in violent crimes, resolving paternity/maternity disputes, finding missing persons, and identifying the human remains of victims of mass disasters, including at Grenfell Tower. The software is among the new technical methods that can be used on the surviving evidence of ‘cold cases’, often with conclusive results.

2. Underpinning research

Dr Robert Cowell has spent over 20 years carrying out joint research with Professor Steffen Lauritzen FRS, currently Professor of Statistics at the University of Copenhagen, and Professor Julia Mortera of Università Roma Tre, Italy, on the modelling and analysis of DNA mixture samples that are recovered from crime scenes. Given one or more such DNA samples, two questions of particular interest that their work addresses are: (1) given the DNA profile of a possible suspect, what is the weight of evidence that the suspect contributed their DNA to the sample?; and (2) if no suspect is at hand, can we obtain the genetic profile or profiles of the unknown contributors to the crime sample, for use, for example, in a search of a DNA database of previously convicted individuals to look for a DNA match? An extension of the second question is familial DNA searching, which involves searching a DNA database to detect and statistically rank a list of potential candidates who may be close biological relatives (e.g., parent, child, sibling) of the unknown individual contributing the evidence DNA profile, combined with lineage testing to help confirm or refute biological relatedness. [3.1],[3.2]

The determination of the likelihood that two or more individuals are biologically related, based on their DNA profiles, is of interest in both civil and criminal applications. A common civil application occurs in a case of disputed paternity in which, for example, a mother claims that the father of her child is a certain male, but he denies this. A related criminal application is that of incest, for example when a man is suspected of fathering a child by his daughter. The establishment of the relatedness of individuals can also be important for immigration cases. The use of Probabilistic Expert Systems (PESs), also called Bayesian networks, in familial DNA searching has great potential in these cases. However, general-purpose PES software is not particularly well suited to the repetitive tasks of specifying an appropriate set of marker networks for a specific problem, editing the many local conditional probability tables, and combining evidence from several genetic markers to evaluate likelihoods. Such software can be time-consuming and error-prone because of the number and sizes of the tables requiring specification in the Bayesian networks. [3.2]

The creation and development of a novel prototype computer programme called FINEX overcame these problems [3.3]. FINEX was originally written to automate the process of constructing Bayesian-network PESs and to provide a user-friendly interface by reproducing the usual genetic tree on the computer screen. Thus, the program shields the user from the technicalities and tedium of specifying Bayesian networks directly, eliminating errors that could arise in specifying the conditional probability tables. The Bayesian network is used to structure a definite genetic problem (in our case, a disputed relationship) in terms of a graphical model (with elementary deterministic relations, probabilistic computational nodes and a query node). Later, FINEX was extended to carry out the calculations itself, without having to export the Bayesian networks to a separate piece of software for analysis.

FINEX allows a user to express the structure of a forensic identification problem in a quick and simple manner through the syntax of a high-level graphical specification language. This allows quite complex hypotheses to be entertained regarding the relationships of individuals which could be so complex that an expert forensic scientist could not do the calculations. The user of the programme specifies two or more hypothetical relationships and the software evaluates the likelihood of the hypothetical relationships between known genetic profiles being actual. Assessments are made based on the differences of the likelihoods of the hypotheses.

It is the speed of the program in carrying out the probability calculations, approximately 2,000 individual profiles per minute, that makes a large-scale search of a database possible in a reasonable time. The algorithms by which FINEX converts the user input in the graphical specification language, and the data on observed markers, into the Bayesian networks used in the PES are described in the research outputs.
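The per-marker calculation that is repeated at this scale can be illustrated with a hand-worked single-marker likelihood ratio (the "paternity index"). The sketch below is a simplified textbook example, assuming the maternal allele is unambiguous so the paternal allele is known; it is not FINEX code, and the allele frequencies are invented for illustration:

```python
# Hand-worked single-marker paternity index: the kind of likelihood-ratio
# calculation that software such as FINEX automates across many markers and
# far more complex pedigrees. Simplified textbook sketch, not FINEX itself.

def transmission_prob(genotype, allele):
    """Probability that a parent with this genotype passes on `allele`
    under Mendelian inheritance (0, 0.5 or 1)."""
    return genotype.count(allele) / 2.0

def paternity_index(paternal_allele, alleged_father, allele_freq):
    """Likelihood ratio for H1 'the alleged father is the father' versus
    H2 'an unrelated random man is': under H2 the paternal allele is simply
    drawn at the population allele frequency."""
    return transmission_prob(alleged_father, paternal_allele) / allele_freq[paternal_allele]

freqs = {"A": 0.1, "B": 0.3, "C": 0.2}  # invented population allele frequencies
# Mother is (A, B) and child is (A, C): the paternal allele must be C.
pi = paternity_index("C", alleged_father=("B", "C"), allele_freq=freqs)
print(pi)  # 0.5 / 0.2 = 2.5 for this marker
```

In casework, such per-marker ratios are multiplied across many independent markers, and real cases involve complications (mutation, missing relatives, ambiguous maternal alleles) that the Bayesian-network formulation handles systematically.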

Another genetics-related area of research is that of learning the pedigree, or family tree, of a group of closely related individuals given the DNA profiles of those individuals. This has a number of applications: it is of interest to biologists studying animal populations, and in the human domain there are two applications of particular interest, namely identifying family groups in a mass grave and identifying deceased victims of a mass disaster event, such as a plane crash, by comparing their DNA with that of living relatives. The Theoretical Population Biology paper [3.4] presents an algorithm for carrying out an exhaustive search for up to 30 individuals, and software that Cowell developed for this has been incorporated into the public-domain FRANz pedigree reconstruction software available from the University of Leipzig [3.5].

Cowell works at the interface of Artificial Intelligence and Statistics and has worked on topics such as decision theory, statistical inference, Bayesian networks, graphical models and expert systems. His main area of research for the last twenty years has focussed on the interface of probabilistic graphical models, machine learning, and artificial intelligence, in particular in the theory and application of Bayesian Networks. These have come to high prominence because of their flexibility, widespread applicability and computational efficiency.

3. References to the research

3.1 Cowell, R.G., Lauritzen, S.L. and Mortera, J. (2007a). A gamma model for DNA mixture analyses, Bayesian Analysis, 2(2), 333–48

3.2 Cowell, R.G., Lauritzen S. L. and Mortera J. (2011), ' Probabilistic expert systems for handling artifacts in complex DNA mixtures', Forensic Science International: Genetics, 5(3), p.202-209

3.3 Cowell R.G. (2003). FINEX: a Probabilistic Expert System for forensic identification, Forensic Science International, 134(2-3), p.196-206

3.4 Cowell, R.G. (2009) A simple greedy algorithm for reconstructing pedigrees, Theoretical Population Biology, 83 p.55-63 (February)

3.5 FRANz Beta Pedigree Reconstruction: A fast and flexible parentage inference program for natural populations. available for free download [Information retrieved 11 May 2020]

Indicators of quality for the underpinning research:

Research was supported by a Leverhulme Trust Interchange Research Grant, ‘Bayesian networks for forensic inference from genetic markers’, October 2001 to September 2004. Principal Investigator: A. Philip Dawid FRS, then at University College London. Funding value: £98,330. Grant number: F/071134/K

Dr Cowell was awarded the 2001 DeGroot Prize by the International Society for Bayesian Analysis along with A. Philip Dawid FRS, Steffen Lauritzen FRS, and Sir David J. Spiegelhalter FRS ‘for research concerned with fundamental issues of statistical inference, decision theory and/or statistical applications.’

4. Details of the impact

FINEX software was introduced to the DNA Unit of the UK Forensic Science Service (UK FSS) in 2006 and was used regularly in criminal casework. Following the closure of the Forensic Science Service in 2012/13, the software was licensed to VidaVia Media SL, which has collaborated in a further re-branding of the application as GPS-ibd, a trademark of Gene Pool Systems (a trading name of VidaVia Media SL); “ibd” signifies “Identical By Descent”. The software has been licensed to three users in this REF period: the United States Department of Defense, LGC Forensics, and ForGenetica Consultants Ltd, which has carried out further functional and casework testing and support for commercial purchases of GPS-ibd. [5.1], [5.2]

GPS-ibd provides fast throughput of cases and decisions and is used in standard forensic relationship casework: routine civil and court-ordered paternity tests, relationship analysis for criminal casework (including rape and incest cases), coronial work, missing-persons and mass-fatality cases, and commercial contract relationship analysis for European and Middle Eastern clients. In addition, an advanced processing functionality of GPS-ibd allows the application to be used for familial searching of the Police/National DNA database. [5.3]

LGC plc, the largest player in the UK’s forensics market since the closure of UK-FSS, reported that the product was “successful” and “ground breaking”. LGC plc has used the software continuously on “some of the most high-profile cases in the past 2 years (including cases that have been highly publicised, highly political and deeply tragic)”. [5.4]

In March 2016, LGC was successful in gaining ISO/IEC 17025:2005 accreditation through the United Kingdom Accreditation Service, under schedule no. 0003, for Relationship Analysis Services using GPS-ibd software, effective from 12 February 2016. ISO/IEC 17025:2005 specifies the general requirements for the competence to carry out tests and/or calibrations, including sampling. It covers testing and calibration performed using standard methods, non-standard methods, and laboratory-developed methods [5.5]. ISO accreditation is an important milestone in product development: it gives assurance to potential clients and makes the results less open to challenge in the courts.

Commenting on the introduction of the new GPS-ibd service following its accreditation, Dr Tim Clayton said: “The GPS-ibd software has significantly increased our capabilities to deal with the more complex pedigrees that can be encountered in forensic casework and allows us to provide a more comprehensive service to our customers. In addition, as the software does not require expensive servers or any other system architecture, it can be deployed on a desktop and runs calculations within seconds. The automation of complex calculations means that computation is less error prone and far more efficient.” [5.5]

Cases where GPS-ibd was used:

In 2017, pharma and life sciences company Eurofins Scientific acquired the forensics division of LGC, the largest player in the UK’s forensics market. Along with the transfer of the trade of that division, 706 employees and the licence for GPS-ibd were transferred to Eurofins Scientific. According to a Senior Reporting Scientist – DNA Science Lead at Eurofins Forensic Services, the software “is used almost every day” and “has been used in hundreds of criminal cases involving kinship issues, most commonly rape”. [5.6]

Immigration DNA testing is now used by the Home Office to confirm disputed relationships, and GPS-ibd is used to process samples in immigration DNA tests that involve kinship disputes, for example, where an unaccompanied child refugee has applied to be reunited with family members already in the country. The software has also been used in immigration cases by various border and home security authorities in several other countries including Norway and the State of Kuwait [5.7].

Eurofins also uses the software routinely to carry out DNA identification of bodies for coronial purposes: coroners are tasked not only to determine the cause of death but also to investigate or confirm the identity of unknown persons found dead within their jurisdiction [5.6].

News media often focus on new DNA-based techniques that are helping authorities pursue investigations that defy conventional approaches, or even reopen investigations that were suspended long ago. Advances in forensic DNA testing are helping to solve a broadening range of difficult cases, including those involving unidentified persons, sexual assaults and homicides. GPS-ibd is among these new techniques and has enabled forensic cold-case reviews. In a specific case from 2019, North Yorkshire Police confirmed the identity of a south-east Asian woman whose body was discovered in a remote location on the Pennine Way, between Pen-y-ghent and Horton in Ribblesdale, on 20 September 2004; GPS-ibd was used to assist with her identification. [5.8]

Over the course of the Troubles in Northern Ireland, 16 people “disappeared”. The Provisional IRA admitted involvement in the forced disappearance of nine of the sixteen victims, mostly in a statement issued in 1999; the INLA admitted responsibility for one further victim, and no attribution has been given for the others. The Independent Commission for the Location of Victims' Remains (ICLVR) was established under the Good Friday Agreement between the UK and Irish Governments, in connection with the affairs of Northern Ireland, in order to locate the 16 missing Irish and British people presumed murdered during the Troubles. To date, the remains of twelve of ‘the disappeared’ have been recovered, ten of them through the ICLVR’s efforts. The GPS-ibd software has been used extensively in this REF period to assist the ICLVR in identifying ‘the disappeared’ [5.7], [5.9].

Familial DNA analysis is helpful not only in identifying offenders but also in identifying the deceased, for example in mass-disaster victim identification following events such as plane crashes. In this REF period the software has been used to identify bodies following disasters; perhaps the highest-profile case was assisting in the identification of victims of the Grenfell Tower fire in 2017, which caused 72 deaths and was the worst UK residential fire since the Second World War [5.6]. GPS-ibd was chosen for this work in part because of its earlier success in the DNA identification of victims of the South-East Asia tsunami in Thailand and in the Air France flight 447 air crash investigation.

Few applications of science and statistics are more impactful than their use in criminal justice: the benefit to the public is direct and palpable, and few fields outside medicine offer so obvious a public good.

5. Sources to corroborate the impact

5.1 Vidavia Group Facebook (5 August 2015) ‘we've managed to sign an agreement with City University to take forwards the brilliant FSS-ibd (aka FSS DNA Lineage) software’. Retrieved from: https://www.facebook.com/VIDAVIAgroup/posts/100547386966595

5.2 Vidavia Group – Portfolio. Retrieved from http://www.genepoolsystems.com/

5.3 Maguire, Chris N. (2015) Development Timeline for GPS-ibd, Internal document documenting product history for VIDAVIA Media. 16th October, (5 pages) available on request.

5.4 Email communication between the Director, VIDAVIA MEDIA S.L. and City, University of London Tech Transfer Office [Received Tue 24/04/2018 11:46] – available on request.

5.5 LGC Group Company News - LGC accredited for Relationship Analysis Services using ground breaking software. Published on 29 MAR 2016. Retrieved from: https://web.archive.org/web/20160701000000*/https://www.lgcgroup.com/about-us/media-room/latest-news/2016/lgc-accredited-for-relationship-analysis-services/

5.6 Personal testimony from Dr Tim Clayton MBE, Senior Reporting Scientist, DNA Science Lead, Eurofins Forensic Services [received 18 February 2020]

5.7 Cases that used GPS-ibd. Personal Communication and testimony from Director of ForGenetica Consultants Ltd. [21 January 2020] - available on request.

5.8 Woman identified in 2004 Pen-y-Ghent body cold-case investigation, North Yorkshire Police News [Last modified: 19 March 2019 at 02:22pm] Testimony from Dr Tim Clayton [5.6] confirms that GPS-ibd software was used to assist with her identification.

5.9 Henry McDonald, Remains confirmed as IRA 'disappeared' Séamus Wright and Kevin McKee, The Guardian, Tue 8 Sep 2015 13.49 BST. Testimony from Dr Maguire [5.7] states that GPS-ibd was used to forensically identify the bodies.

_________

Note: LGC Forensics was acquired by Eurofins in 2017. Dr Clayton’s testimonies above have different employer affiliations depending on the year in which they were given.
