
Impact case study database

The impact case study database allows you to browse and search for impact case studies submitted to the REF 2021.

Showing impact case studies 1 to 3 of 3
Submitting institution: University of Durham
Unit of assessment: 11 - Computer Science and Informatics
Summary impact type: Technological
Is this case study continued from a case study submitted in 2014? No

1. Summary of the impact

Our development of novel algorithms for automatic threat item detection and associated capabilities for use within X-ray and Computed Tomography (CT) security scanners informed government policy, commercial product development and airport technology planning in this area. This research, carried out between 2014 and 2020 at Durham University, directly:

  • informed UK/US government policy for aviation and border security screening

  • provided new enhanced software capabilities within the design of security scanners by Gilardoni S.p.A., Kromek PLC, Micro-X Ltd, Rapiscan Ltd and Smiths Detection

  • informed the design, development and evaluation of new software capabilities for security scanners with Cosmonio BV, Battelle, L3Harris Technologies and VisionMetric Ltd

  • informed the procurement of airport security screening products at Gatwick Airport

These impacts directly contribute to the security of over 500 million passenger journeys per annum, with additional potential reach to 2-3 billion passenger journeys across 30+ countries globally.

2. Underpinning research

This research relates to the use of automated image understanding techniques to provide enhanced automatic screening for the detection of threat or contraband items within X-ray and CT security imagery. This addresses the question “What is in the bag/parcel/package?” (Figure 1). Conventional X-ray screening offers multiple-view (2D) images of a scanned item (Figure 1, left) whilst recent advances in high-throughput CT imaging offer a full 3D volumetric image (Figure 1, middle). Across both domains, the key challenge is to address this question with a high probability of detection (true positive) whilst minimising the false alarm rate (false positive) within highly cluttered scanner imagery under real-time processing bounds (as the item passes through the scanner). Furthermore, this aim must be achieved whilst working from limited example data availability (compared to other image understanding domains).


Figure 1: Exemplar X-ray (single view, 2D, left) and CT (3D volume, rotatable, middle) imagery of a baggage item, and the on-site X-ray testing facilities at Durham University (right).

The Durham research team pioneered the use of object recognition algorithms for this task: firstly, by extending existing object recognition paradigms for use with 3D CT security imagery [R1, R2, R5] and subsequently by introducing the use of deep convolutional neural network (CNN) based approaches to the domain of X-ray based security screening [R3, R4]. Furthermore, we were able to extend this research to support the related research question of how to perform threat image projection (TIP) within CT imagery, i.e. “How do we realistically insert a threat item into an otherwise benign 3D bag image?” [R5, R6]. TIP inserts such “fake” threat items into an otherwise benign baggage item in order to monitor security screening operator performance. This is a legally mandated process already in place for all X-ray based aviation security screening processes (UK/EU and beyond). In addition, it will now be increasingly used for 3D CT-based aviation security screening processes, due to the move to new mandatory UK/EU security screening standards for both hold baggage (ECAC EDS standard 3 - EU Regulation 1087/2011, September 2020) and cabin baggage (ECAC EDSCB C1-C3, December 2022), both of which require 3D CT security screening. The key findings of this research at Durham were that:

  • object detection can be performed using an extended 3D bag-of-visual-words architecture in cluttered CT imagery, achieving a ~98% true positive detection rate with a low (<1%) false positive rate [R1].

  • plausible TIP within 3D CT imagery can be performed by using materials-based segmentation for void space determination [R2], with noise and artefact filtering shown to be of lesser concern [R5], within an optimisation-based emplacement framework [R6].

  • transfer learning, with a CNN architecture pre-trained on visible-band imagery and refined on X-ray imagery, enables object detection in cluttered X-ray imagery with a high true positive rate (98%+) and a low (<1%) false positive rate despite limited data set availability and inherent differences in image characteristics (projection, spectral band, colour/texture) [R3, R4] (Figure 2); a minimal illustrative sketch of this transfer-learning approach follows this list.

  • the levels of performance achieved by these CNN approaches are repeatable over multiple datasets including those independently provided by the UK government [R4].

  • deep neural networks enable object detection in cluttered X-ray imagery [R3, R4], significantly outperforming earlier feature-based approaches (83% true positive / 3% false positive [R4]).
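
As an illustration of the transfer-learning finding above, the following is a minimal sketch assuming PyTorch/torchvision and a hypothetical directory of labelled X-ray image crops; it is not the exact architecture, dataset or training protocol of [R3, R4], but shows the general idea of fine-tuning a CNN pre-trained on visible-band (ImageNet) imagery for X-ray threat classification.

# Minimal transfer-learning sketch (not the exact architecture or training
# protocol of [R3, R4]): a CNN pre-trained on visible-band (ImageNet) imagery
# has its final layer replaced and is fine-tuned on X-ray baggage crops.
# The directory layout and hyperparameters below are illustrative only.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# X-ray crops arranged as xray_patches/train/<class>/<image>.png (hypothetical)
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
train_set = datasets.ImageFolder("xray_patches/train", transform=preprocess)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Start from visible-band (ImageNet) weights, then fine-tune on X-ray imagery.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))
model = model.to(device)

optimiser = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):  # a handful of fine-tuning epochs for illustration
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimiser.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimiser.step()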

3. References to the research

[R1]. 3D Object Classification in Baggage Computed Tomography Imagery using Randomised Clustering Forests (A. Mouton, T.P. Breckon, G.T. Flitton, N. Megherbi), In Proc. International Conf. on Image Processing, IEEE, pp. 5202-5206, 2014. [DOI]

[R2]. Materials-Based 3D Segmentation of Unknown Objects from Dual-Energy Computed Tomography Imagery in Baggage Security Screening (A. Mouton, T.P. Breckon), In Pattern Recognition, Elsevier, Volume 48, No. 6, pp. 1961-1978, 2015. [DOI]

[R3]. Transfer Learning Using Convolutional Neural Networks for Object Classification within X-Ray Baggage Security Imagery (S. Akcay, M.E. Kundegorski, M. Devereux, T.P. Breckon), In Proc. Int. Conf. on Image Processing, IEEE, pp. 1057-1061, 2016. [DOI]

[R4]. Using Deep Convolutional Neural Network Architectures for Automated Object Detection and Classification within X-ray Baggage Security Imagery (S. Akcay, M.E. Kundegorski, C.G. Willcocks, T.P. Breckon), In IEEE Transactions on Information Forensics & Security, IEEE, Volume 13, No. 9, pp. 2203-2215, 2018. [DOI]

[R5]. On the Relevance of Denoising and Artefact Reduction in 3D Segmentation and Classification within Complex Computed Tomography Imagery (A. Mouton, T.P. Breckon), In J. of X-Ray Science and Technology, IOS Press, Volume 27, No. 1, pp. 51-72, 2019. [DOI]

[R6]. A Reference Architecture for Plausible Threat Image Projection (TIP) within 3D X-ray Computed Tomography Volumes (Q. Wang, N. Megherbi, T.P. Breckon), In J. of X-Ray Science and Technology, IOS Press, Volume 28, No. 3, pp. 507-526, 2020. [DOI]

Papers were peer reviewed as part of the publication process and show clear originality / rigour.

4. Details of the impact

X-ray, and more recently CT-based, security screening is used in both transport and border security for the detection of explosives, weapons and contraband for the purposes of both law enforcement and counter terrorism. Whilst the regulatory mandate for this technology in aviation and border security is a policy matter for government, the technical development of security scanners to meet these regulatory requirements is carried out by commercial suppliers. The research work at Durham [R1-R6] has had both policy and commercial impact.

Impact on Government Policy: Based on this research at Durham, Professor Breckon has:

  • advised the UK government in his role as a member of the Cabinet Office Cyber Experts Group, under the direction of the Civil Contingencies Secretariat reporting to the Chief Scientific Advisor for National Security, on aspects of computer technology within transport and border security (2015-present) [E1]. [REDACTED]

  • advised the UK Government Office of Science (2016-2017), reporting directly to the Chief Scientific Adviser (CSA) to HM Government, [REDACTED]

  • enabled the test and evaluation of reference detection algorithms from [R4] on classified X-ray imagery by scientists at the UK Home Office [REDACTED]

  • informed US Department of Homeland Security (US DHS) – Transportation Security Administration (TSA) [REDACTED]


Figure 2: Exemplar object detection in an X-ray image for threat detection (single view, 2D, left) [R4].

Commercial Impact: Based on this research at Durham:

  • Rapiscan Systems Ltd (UK) [REDACTED]

Conservative estimates put Rapiscan at a 40% market share of the global CT security scanner market (source: European Commission, DG Competition - Case M.8087). With the move to CT to meet the new security standards, this translates to a global reach of 1.64 billion passenger journeys per annum (based on: IATA statistics, 2018).

  • Smiths Detection GmbH (Germany) [REDACTED] Conservative estimates put Smiths Detection at a 30% market share of the global CT security scanner market (source: European Commission, DG Competition - Case M.8087). With the move to CT to meet the new security standards, this translates to a global reach of 1.23 billion passenger journeys per annum (based on: IATA statistics, 2018).
  • Cosmonio Imaging BV (Netherlands) [REDACTED] “in collaboration with X-ray scanner manufacturer L3 Harris Technologies, [REDACTED] (resulting company income - GBP260k) [E7]. [REDACTED] (contributing to the acquisition of Cosmonio by Intel in 2020). Conservative estimates put L3Harris at a 9% market share of the global X-ray security scanner market (source: X-Ray Screening Report, 360 Market Updates, 2019) - 369 million passenger journeys per annum (IATA statistics, 2018).

  • Gilardoni S.p.A. (Italy) has integrated an algorithm from the research of [R4] within their X-ray scanner product line, resulting in an industry showcase at the UK Security Expo (2017) [REDACTED] “Today, detection is done by measuring the atomic number [of an object]. Now we want to include shape recognition [to be commercialised by Gilardoni] and integrated into its products ... [This] will give us much more precise detection ... [and] increase our market share by having better equipment than our competitors ... The market wants to see these ... upgrades as quickly as possible ... Whilst all new Gilardoni equipment should carry the latest algorithm [from Durham], there is also a strong possibility that it will be retrofittable on legacy systems.” Andrea Rotta, product specialist, Gilardoni [E8].

  • Kromek Ltd (UK) [REDACTED]

  • Micro-X Ltd (Australia) [REDACTED]

  • VisionMetric Ltd (UK) [REDACTED]

  • Gatwick Airport (UK) [REDACTED]

  • Battelle (USA) [REDACTED]

Summary: Research from the Durham team [R1-R6] has significantly impacted government regulatory policy in the US and UK (representing the 1st and 4th largest national passenger volumes; source: World Bank, 2018), directly helps secure 500+ million passenger journeys annually, and has global commercial impact across nine businesses, with commercial reach to over 70% of the market, securing up to 2-3 billion passenger journeys annually.

5. Sources to corroborate the impact

[E1]. Letter – Chief Scientific Adviser for National Security, HM Government, 9 Feb. 2015.

[E2]. Letter – [REDACTED]

[E3]. Testimonial – [REDACTED] DSTL, HM Government, Feb. 2020.

[E4]. Testimonial – [REDACTED] US DHS, Dec. 2020 & TSA - Report to US Congress, 2019.

[E5]. Testimonial – Rapiscan Systems Ltd, [REDACTED] 2020.

[E6]. Testimonial – Smiths Detection, [REDACTED]

[E7]. Testimonial – Cosmonio Ltd, [REDACTED] (Feb. 2020) and CEO + L3Harris presentation [REDACTED] (2018).

[E8]. “Automation accelerates the checkpoint - Electronics screening research blends with machine learning for improved accuracy and efficiency”, Jane’s Airport Review (2018, 30(1)).

[E9]. Testimonial – Kromek Ltd, [REDACTED] Feb. 2020.

[E10]. Testimonial – Micro-X Ltd, [REDACTED] Feb. 2020.

[E11]. Testimonial – VisionMetric Ltd, [REDACTED] Feb. 2020.

[E12]. Testimonial – Gatwick Airport, [REDACTED] Sept. 2020.

[E13]. Testimonial – Battelle, [REDACTED] Dec. 2020.

Submitting institution: University of Durham
Unit of assessment: 11 - Computer Science and Informatics
Summary impact type: Technological
Is this case study continued from a case study submitted in 2014? No

1. Summary of the impact

Our development of novel algorithms for combined real-time object detection, classification, localisation and tracking within automated wide-area surveillance, carried out between 2013 and 2016 at Durham University, directly:

  • informed UK government science, technology and procurement policy for technical interface standardisation within wide-area, multi-sensor surveillance systems.

  • informed scientific policy-making activity by the governments of UK, USA, Canada, Australia, New Zealand and Netherlands on wide-area, multi-sensor surveillance systems.

  • informed the design, development and evaluation of wide-area surveillance sensors and associated technologies that were developed commercially by AptCore, Autonomous Devices, Blue Bear Systems Research, Createc, Cubica Technology and QinetiQ.

This directly contributed to GBP23.2 million investment in multi-sensor surveillance systems (UK/US government/industry), GBP11.9 million of additional commercial income and supported the creation of ~55 additional science and engineering jobs across six organisations (2013-2020).

2. Underpinning research

This research uses automated image understanding to provide long-term wide-area surveillance of dynamic scene objects (people, vehicles) addressing questions such as - “Is there anything there?” (detection), “What is it?” (classification), “Where is it?” (localisation) and “What is its behaviour?” (tracking). The key challenge is to be able to address these issues within real-time processing constraints (“as it happens”) and within the processing capability available within the size, weight and power (SWaP) constraints of a field-deployable, long-duration sensor unit. The Durham research team were able to introduce the idea that all-weather, long-term automated visual surveillance addressing all of these key questions could be achieved in real-time, within such a field-deployable unit, by the integrated co-design of both the sensor units (hardware, [R4]) and the associated algorithms (software [R1-R3], Fig. 1). Furthermore, we were able to develop a novel set of algorithms, suited specifically to our key design decision to perform sensing in the far-infrared (thermal) spectrum, enabling real-time processing performance within a field-deployable, all-weather sensor unit.


Figure 1: Field-deployable thermal imaging sensor units developed by Durham University (left/middle), and vehicle detection/tracking integrated within the SAPIENT user interface (right) – (Crown copyright).

This sensor unit design (hardware), and associated real-time algorithms (software), contributed to the design and validation of a novel wide-area surveillance concept that allowed sensors of varying capability and sensing modality to participate in a common sensor network by dynamically declaring availability, integrity and capability via a common interface [R4].
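
As a purely illustrative sketch of this capability-declaration idea, and explicitly not the actual SAPIENT Middleware ICD message format (whose fields, schema and transport are defined in the published specification), a sensor self-description might be assembled as follows; every field name below is hypothetical.

# Purely illustrative: a sensor self-describing its availability, integrity and
# capabilities to the wider network, as per the concept in [R4]. This is NOT
# the actual SAPIENT Middleware ICD message format; all field names here are
# hypothetical and chosen only to illustrate the idea of capability declaration.
import json

declaration = {
    "sensor_id": "thermal-unit-01",            # hypothetical identifier
    "modality": "far-infrared",
    "availability": True,
    "integrity": "nominal",
    "capabilities": [
        {"task": "detect", "classes": ["pedestrian", "vehicle"], "real_time": True},
        {"task": "classify", "sub_types": ["car", "van", "4x4", "HGV"]},
        {"task": "track", "output": "3D ground-plane position"},
    ],
}

# In a deployed system this declaration would be passed to the decision-making
# module over the common interface; here we simply serialise and display it.
print(json.dumps(declaration, indent=2))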

The key findings of this Durham-based research were that:

• robust all-weather detection and classification of pedestrians [R1] and vehicles [R3] can be achieved in real-time, within a limited field-deployable computational footprint, using far-infrared (thermal) sensing and a unique algorithmic combination of multiple mixture-of-Gaussians background models (to overcome inherent variations within the thermal imagery) coupled with a feature-based foreground classification model [R1, R3]. This meant that objects could be detected and classified by type (pedestrian [R1], vehicle [R3]) during long-duration sensor deployment despite extreme changes in environmental conditions (illumination, temperature). A simplified illustrative sketch of this detection and localisation pipeline follows this list.

• photogrammetry can enable the passive recovery of 3D object position within the scene, relative to the camera, from far-infrared (thermal) video imagery and deliver positional accuracy within the expected error bounds of GPS (for object localisation [R1]). Furthermore, the expected impact of variations in human pose (for pedestrian targets) on this accuracy could be readily overcome via the use of regressive posture estimation and subsequent algorithmic correction [R2]. This meant that objects detected and classified within the scene could be tracked in real-world 3D co-ordinates via photogrammetry-based algorithms [R1, R2], without the need for active sensing, as accurately as if explicit GPS trackers were placed upon the objects themselves. Our ability to process 3D object localisation in real-time further facilitated an extension of commonly-used Kalman filter based tracking, traditionally used for 2D pixel-wise object tracking, to full 3D object tracking in the scene [R1, R2].

• the combined use of detection, classification and (3D) tracking within far-infrared [R1-R3] further enables the robust secondary classification of objects to determine sub-type (for vehicles - car, van, 4x4, HGV [R3]) or behaviour (for pedestrians - running, walking, loitering, digging, crawling [R1, R2]) based on an accumulated feature representation [R1-R3].

• imperfections in the combined detection/classification/localisation/tracking capability of any deployed sensing modality and associated processing, such as far-infrared [R1-R3], can be mitigated via integration into a non-homogeneous sensing network, via a suitably defined common interface specification, such that these shortcomings within current ambient conditions can be overcome via fusion with complementary sensing technologies [R4].
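
As a simplified illustration of the detection and localisation findings above, the following sketch uses OpenCV's off-the-shelf MOG2 mixture-of-Gaussians background subtractor as a stand-in for the multi-model approach of [R1, R3], and a basic pinhole ground-plane back-projection as a stand-in for the full photogrammetric localisation of [R1, R2]; the camera parameters and input file are illustrative values only, not those of the Durham sensor unit.

# Minimal sketch, not the Durham implementation: OpenCV's MOG2 mixture-of-
# Gaussians background model stands in for the multi-model approach of
# [R1, R3], and a simple pinhole ground-plane back-projection stands in for
# the full photogrammetric localisation of [R1, R2]. All camera parameters
# and the input file name below are illustrative values.
import cv2
import numpy as np

FX, FY = 800.0, 800.0          # focal length in pixels (illustrative)
CX, CY = 320.0, 240.0          # principal point (illustrative)
CAM_HEIGHT = 4.0               # camera height above the ground plane (m)

def localise_on_ground_plane(u, v):
    """Back-project the foot point (u, v) of a detection onto the ground
    plane for a camera at height CAM_HEIGHT looking horizontally."""
    ray = np.array([(u - CX) / FX, (v - CY) / FY, 1.0])
    if ray[1] <= 0:            # ray does not intersect the ground ahead
        return None
    scale = CAM_HEIGHT / ray[1]
    point = ray * scale        # camera-frame coordinates (x right, y down, z forward)
    return point[0], point[2]  # lateral offset and range on the ground (m)

backsub = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)
video = cv2.VideoCapture("thermal_sequence.avi")  # hypothetical input file

while True:
    ok, frame = video.read()
    if not ok:
        break
    mask = backsub.apply(frame)                       # foreground mask
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) < 200:                  # ignore small blobs
            continue
        x, y, w, h = cv2.boundingRect(c)
        position = localise_on_ground_plane(x + w / 2.0, y + h)  # foot point
        if position is not None:
            print("object at lateral %.1f m, range %.1f m" % position)
video.release()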

This Durham-based research contributed to the SAPIENT (Sensing for Asset Protection using Integrated Electronic Networked Technology) project (2014-2016) that developed a novel integrated, multi-modal wide-area surveillance network such that “individual [deployed] sensors make low-level decisions autonomously … [and] are managed by a decision-making module which controls the overall system ... to reduce … the need to constantly monitor the output of the sensors.” [R4]. This joint research was carried out in collaboration with the Defence Science and Technology Laboratory (DSTL) (system requirements), QinetiQ (systems integration, visible-band camera sensors), Cubica Technology (decision-making module), AptCore (radar sensors), Createc (laser range sensors) and Durham (thermal imaging sensors), resulting in a joint publication [R4].

3. References to the research

[R1]. A Photogrammetric Approach for Real-time 3D Localisation and Tracking of Pedestrians in Monocular Infrared Imagery (M.E. Kundegorski, T.P. Breckon), in Proc. SPIE Optics and Photonics, Volume 9253, No. 01, pp. 1-16, 2014. [http://dx.doi.org/10.1117/12.2065673]

[R2]. Posture Estimation for Improved Photogrammetric Localisation of Pedestrians in Monocular Infrared Imagery (M.E. Kundegorski, T.P. Breckon), in Proc. SPIE Optics and Photonics, Volume 9652, No. XI, pp. 1-12, 2015. [https://doi.org/10.1117/12.2195050]

[R3]. Real-time Classification of Vehicle Types within Infra-red Imagery (M.E. Kundegorski, S. Akcay, G. Payen de La Garanderie, T.P. Breckon), in Proc. SPIE Optics and Photonics, Volume 9995, pp. 1-16, 2016. [https://doi.org/10.1117/12.2241106]

The following is a joint publication on the overall SAPIENT concept with authors from all of the research partners: DSTL (Thomas), QinetiQ (Marshall, Faulkner, Kent), Cubica Technology (Page, Islip), AptCore (Styles), Createc (Clarke), Durham (Breckon, Kundegorski):

[R4]. Towards Sensor Modular Autonomy for Persistent Land Intelligence Surveillance and Reconnaissance (P.A. Thomas, G.F. Marshall, D. Faulkner, P. Kent, S. Page, S. Islip, J. Oldfield, T.P. Breckon, M.E. Kundegorski, D. Clarke, T. Styles), in Proc. SPIE Ground/Air Multisensor Interoperability, Volume 9831, No. VII, pp. 1-18, 2016. [https://doi.org/10.1117/12.2229720] [Sapient Project YouTube Summary Video]

Papers have been peer reviewed as part of the publication process and show clear originality and rigour.

4. Details of the impact

Effective wide-area surveillance is a key area of interest within both military operations and in civil infrastructure protection as it offers situational awareness – i.e. knowing what is happening and where it is happening within the environment (source: MoD Science & Technology Strategy, 2017).

Whilst the specification and evaluation of the operating requirements for these surveillance tasks is a policy matter for government, the technical development of sensing technologies to meet these requirements is carried out by commercial suppliers. In the UK, government science and technology policy in this area is informed by the UK Defence Science and Technology Laboratory (DSTL), an executive agency of the UK Ministry of Defence whose aim “is to maximise the impact of science and technology for UK defence and security” (DSTL). The research work at Durham [R1-R4] has had both policy and commercial impact within this area.

Impacts of Durham research on UK government policy are as follows:

• informed UK government science and technology policy for wide-area surveillance systems by enabling the UK DSTL to perform the “design, evaluation and validation of a new common interface specification for integrating varying types of intelligent networked sensors .. with direct reference to the performance characteristics of the [Durham] all-weather, all-condition camera-based thermal ... and the … requirements of the state-of-the-art real-time object detection, classification and tracking algorithms [provided by Durham -R2, R3]” [E1]. This was supported by “a total budget of GBP3 million (2013-2016), with additional technical and management support from ~10 DSTL staff scientists” - DSTL [E1]. The resulting UK government science and technology policy publication (SAPIENT Middleware Interface Control Document (ICD)) was publicly released under the UK Open Government Licence at ( http://www.gov.uk/sapient). Paul Thomas, Principal Scientist at DSTL, commented that this enabled “an important step forward in enabling sensors to ‘plug and play’ … [a] significant benefit both in the area of civil security such as the protection of infrastructure and in military systems such as for base protection.” [E2].

• informed UK government scientific policy-making activity by enabling the DSTL to test and evaluate SAPIENT “for integrated wide-area surveillance within the inclusion of the thermal imaging solution provided by Durham [R1-R3], against a range of staged scenarios” including “two large-scale technology demonstrations to inform ~60 senior personnel spanning UK military and civil infrastructure protection (from UK Ministry of Defence (MoD), Centre for Protection of National Infrastructure, Home Office, Dept. Transport) of the ‘art of the possible’ for integrated wide-area surveillance … (Malvern, Sept. 2015 / Throckmorton Airfield, June 2016)” [E1] and technical evaluation “against a range of unmanned aerial system [drone] targets flown in a variety of attack trajectories, using radar and electro-optic [sensors; R4].” [E3]. “The demonstrations provided an excellent showcase of sensor technology that was very useful in helping both military and civil [policy] stakeholders understand the progress, and more importantly the potential, for the technology.” - DSTL [E1].

• informed UK government procurement policy for wide-area surveillance systems via the Defence and Security Accelerator (DASA). This led to the SAPIENT Middleware ICD, enabled by Durham research [in R1-R4], being adopted as the preferred interfacing option for commercial system suppliers to the UK MoD across GBP11.3 million of commercial research and development funding (2017-2020) [E4]. Furthermore, SAPIENT has been adopted for autonomous sensor management within the joint US Army / UK MoD procurement “Signal and Information Processing for Decentralized Intelligence, Surveillance, and Reconnaissance” (W911NF17S0003-US-UK, USD1.2 million, 2020). The UK Minister for Defence Procurement, Stuart Andrew MP, commented that by informing policy within UK defence procurement, SAPIENT “can act as autonomous eyes in the urban battlefield. Investing millions in advanced technology like this will give us the edge in future battles. It also puts us in a really strong position to benefit from similar projects run by our allies as we all strive for a more secure world.” [E2].

• enabled collaborative experimentation internationally between the [governments of the] Five Eyes allied nations of Australia, Canada, New Zealand, the UK and the USA in the Contested Urban Environment experiment involving over 150 government and industry scientists and over 80 Canadian troops for three weeks in Montreal (2018) [E1]. This provided the five nations with a means to evaluate a range of differing sensors and operating methods for effective wide-area surveillance based on the work of [R1-R4] and informed the policy-making work of defence research scientists in the UK and partner nations. DSTL Chief Executive, Gary Aitkenhead, commented “[SAPIENT] is a fantastic example of our world-leading expertise at its best; our scientists working with our partner nations to develop the very best technology for our military personnel now and in the future.” [E2]. The collaboration has also informed the surveillance requirement policy of the military end user.


Figure 2: Briefing and site walk-around with demonstration attendees (upper), view from Durham sensor units (lower, left) and integrated multi-sensor visualisation via the common graphical display to operator (lower, right) – June 2016 (Crown Copyright).

“[SAPIENT] brings together our requirements as a user and DSTL as scientific advisers ... with our key allies in the five-eyes community.” said Lt Col Nat Haden, SO1 Intelligence, Surveillance, Target Acquisition and Reconnaissance Capability, British Army Headquarters [E2].

• enabled TNO (Netherlands, government research agency) to perform “rapid integration of raw and processed sensors by using open standards and the SAPIENT interface” [E3] designed and validated using [R1-R3] (2018). This allowed the Netherlands government to develop and evaluate additional novel sensing approaches to inform policy-making in wide-area surveillance and enabled international technical collaboration, via the SAPIENT ICD [E1], with UK-based commercial defence supplier, QinetiQ [E10].

• enabled the UK DSTL to perform “ongoing development of SAPIENT, via the DSTL-led SAPIENT Interface Management Panel, to the current SAPIENT Middleware Interface Control Document (Issue: 5.0, 11th May 2020) supported by an investment in excess of GBP5 million (2016-2020) and ~20 research scientists” [E1] in order to revise and progress UK government science and technology policy in this area (2016-2020). “.. without the work of the Durham research team DSTL would not have been able to validate the performance and design of the SAPIENT interface or evaluate the SAPIENT concept of operation (con-ops) against the operating characteristics of this militarily important sensing modality. Moreover, the capability of the Durham [sensor] for detection, tracking and classification of behaviours … of pedestrians [R1, R2] provided a cogent demonstration of the military and security relevance of the wider SAPIENT system.” - DSTL [E1].

Commercial impact based on this research at Durham has enabled:

AptCore (UK), [REDACTED]

Autonomous Devices (UK), [REDACTED]

Blue Bear Systems Research (UK), [REDACTED] This enabled Blue Bear “to deliver the UK’s most complex autonomous air vehicle trial [to date]” – Williams-Wynn [E7], with a single human operator simultaneously controlling “20 fixed wing drones to form a collaborative heterogeneous swarm” and collaborative “payloads and payload support from Plextek, IQHQ, Airbus and Durham University …” at RAF Spadeadam, Cumbria in September 2020 (resulting company income: GBP2.5 million, [E2]).

Createc (UK), [REDACTED]

Cubica Technology (UK) , [REDACTED]

QinetiQ (UK), lead technical authority on the SAPIENT interface definition, [REDACTED]

Summary: Research from the Durham team [R1-R4] has had policy impact and enabled collaborative policy-making activity with six national governments (Australia, Canada, Netherlands, New Zealand, UK, USA), contributed to a GBP23.2 million investment in multi-sensor surveillance (UK/US government/industry) and resulted in GBP11.9 million of additional commercial income to six companies supporting the creation of ~55 science and engineering jobs in the UK.

5. Sources to corroborate the impact

[E1]. Testimonial – Defence Science Technology Laboratory (DSTL), [REDACTED], August 2020.

[E2]. Press releases x 3 - DSTL / UK MoD, 18 December 2015, 24 Sept. 2018, 28 March 2019.

[E3]. Government white papers x 2 – UK Government (DSTL): Thomas, P. et al., SPIE, 10802/108020D / Dutch Government (TNO): Bouma, H. et al., SPIE, 10802/108020N, 2018.

[E4]. UK DASA + US Army – competition & contract award documents (x 5, 2017-2020).

[E5]. Testimonial – AptCore Ltd [REDACTED], August 2020.

[E6]. Testimonial – Autonomous Devices Ltd, [REDACTED], December 2020.

[E7]. Testimonial – Blue Bear Systems Research Ltd, [REDACTED], Feb. 2020 / Press Article – ADS Advance, December 2020.

[E8]. Testimonial – Createc Ltd, [REDACTED], August 2020.

[E9]. Testimonial – Cubica Technology Ltd, [REDACTED], August 2020.

[E10]. Testimonial – QinetiQ Ltd – [REDACTED], August 2020.

Submitting institution: University of Durham
Unit of assessment: 11 - Computer Science and Informatics
Summary impact type: Technological
Is this case study continued from a case study submitted in 2014? No

1. Summary of the impact

Our development of novel algorithms for automatic scene understanding for both on-road and off-road vehicles has enabled informed technology roadmapping and commercial product development for both driver assistance systems and vehicle autonomy (“driver-less cars”). This research, carried out between 2014 and 2019 at Durham University, has contributed to:

  • the realisation and large-scale evaluation of the “Human-like Guidance” navigation concept for driver/vehicle interaction at Renault

  • the design, test and evaluation process of several different sensor and algorithm options for on-road and off-road vehicle sensing systems at Jaguar Land Rover (JLR)

  • the design, test and evaluation of a novel vehicle localisation (positioning) sensor that is now in commercial production with Machines with Vision

  • the design and development of multi-camera 3D scene mapping sensor systems at ZF Race Engineering - Conekt Engineering Services (formerly TRW Conekt)

This directly informed the research and development at two of Europe's leading automotive manufacturers (France/UK: Renault / Jaguar Land Rover - annual combined R&D budget: ~GBP4.5 billion), enabled the supply of 3D scene mapping systems to the UK MoD (ZF) and supported the translation of vehicle localisation technology into rail, helping to protect 4.3 billion passenger journeys annually over ~57,000 km of track (Germany/UK: Machines with Vision).

2. Underpinning research

This research relates to the use of automated image understanding techniques for on-board vehicle sensing pertaining to both assistive driver technology (known as Advanced Driver Assistance Systems (ADAS)) and vehicle autonomy (i.e. “driverless cars”). Within this context we addressed the two key algorithmic tasks within on-vehicle scene understanding - “Where am I?” (known as the task of localisation) and “What is around me?” (known as the task of semantic scene understanding). The key challenge is to be able to address these tasks accurately, efficiently (i.e. in real-time relative to the vehicle speed) and robustly under varying environmental (weather) conditions for applications in both ADAS and vehicle autonomy.

In contrast to the prevailing trend of using an increasingly complex (and costly) array of sensors to support automotive sensing tasks, the Durham research team specifically targeted the use of low-cost camera sensors and researched the algorithmic approaches required to offer efficient and robust sensing under varying environmental conditions.

The key findings of this research at Durham were that:

  • real-time object detection and classification from a single on-board camera can be extended to simultaneously provide high-level sub-categorical attributes (orientation, state, colour) using a single end-to-end convolutional neural network (CNN) architecture (processing at up to 10 fps) and this outperforms traditional hand-crafted feature-driven approaches for such tasks [R1].


Figure 1: Algorithm output for object detection (top, [R1]), monocular 360° depth and 3D object detection (middle, [R6]), stereo scene mapping (bottom left, [R2]) and off-road semantic scene understanding (bottom right, [R2, R4]).

  • semantic scene segmentation via CNN, where each image pixel is labelled by its object category, can be successfully extended from the urban to the off-road driving environment via the use of transfer learning over a very limited dataset and can outperform traditional hand-crafted feature-driven approaches [R2]. Using stereo depth information for semantic understanding [R4] offers only a marginal improvement over a single monocular colour camera [R2].

  • robust 3D scene mapping of an urban driving environment can be performed in the presence of dynamic scene objects (vehicles, pedestrians), based on the combined use of stereo visual odometry and structure from motion with an optimally calibrated real-time stereo depth estimation approach [R3] (a simplified stereo-depth sketch follows this list).

  • “end-to-end” autonomous driving (i.e. monocular image input → speed + direction output) can be extended to the off-road environment based on the use of stereo visual odometry as a methodology for prior recovery of human off-road driving behaviour in order to use this as an input to an “end-to-end” CNN approach [R5].

  • real-time object detection and monocular depth estimation (from a single camera) can be extended to use 360° panoramic imagery, from low-cost consumer-grade spherical cameras, to offer both 3D object detection and ranging in addition to high granularity 3D scene depth around the entire vehicle as it transits an urban environment [R6]. In related work, the quality of the resulting 360° monocular depth recovered via [R6] is shown to be significantly superior to that obtained via a conventional stereo vision pipeline using consumer-grade spherical (360°) cameras.

  • the placement, mounting and inter-synchronisation of multiple low-cost sensors with other on-board auxiliary equipment can be readily optimised for the on/off-road environment to offer effective vehicle situational awareness in support of broader ADAS and autonomy functionality, overcoming issues such as vibration and fouling [R1-R6].
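
As a simplified illustration of the stereo depth estimation underpinning the scene mapping of [R3], the following sketch uses OpenCV's semi-global block matcher to compute a disparity map and converts it to metric depth via depth = focal length x baseline / disparity; the filenames, focal length and baseline are illustrative values, and this is a generic stand-in rather than the optimally calibrated real-time pipeline of [R3].

# Minimal stereo-depth sketch (a generic stand-in for, not the implementation
# of, the calibrated real-time stereo pipeline used in [R3]): disparity from
# OpenCV's semi-global block matcher is converted to metric depth via
# depth = f * B / d. Filenames, focal length and baseline are illustrative.
import cv2
import numpy as np

FOCAL_PX = 700.0    # rectified focal length in pixels (illustrative)
BASELINE_M = 0.40   # stereo baseline in metres (illustrative)

left = cv2.imread("left_rectified.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right_rectified.png", cv2.IMREAD_GRAYSCALE)

matcher = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=128,        # must be divisible by 16
    blockSize=5,
    P1=8 * 5 * 5,              # smoothness penalty for small disparity changes
    P2=32 * 5 * 5,             # smoothness penalty for large disparity changes
    uniquenessRatio=10,
    speckleWindowSize=100,
    speckleRange=2,
)

# Disparity is returned as fixed-point values scaled by 16.
disparity = matcher.compute(left, right).astype(np.float32) / 16.0

depth = np.zeros_like(disparity)
valid = disparity > 0
depth[valid] = FOCAL_PX * BASELINE_M / disparity[valid]  # metric depth map (m)

print("median scene depth: %.2f m" % np.median(depth[valid]))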

3. References to the research

[R1]. On the Performance of Extended Real-Time Object Detection and Attribute Estimation within Urban Scene Understanding (K.N. Ismail, T.P. Breckon), In Proc. Int. Conf. on Machine Learning Applications, IEEE, pp. 641-646, 2019. [DOI: 10.1109/ICMLA.2019.00117]

[R2]. From On-Road to Off: Transfer Learning within a Deep Convolutional Neural Network for Segmentation and Classification of Off-Road Scenes (C.J. Holder, T.P. Breckon, X. Wei), In Proc. Euro. Conf. on Computer Vision Workshops, Springer, pp. 149-162, 2016. [DOI: 10.1007/978-3-319-46604-0_11]

[R3]. Generalized Dynamic Object Removal for Dense Stereo Vision Based Scene Mapping using Synthesised Optical Flow (O.K. Hamilton, T.P. Breckon), In Proc. International Conference on Image Processing, IEEE, pp. 3439-3443, 2016. [http://dx.doi.org/10.1109/ICIP.2016.7532998]

[R4]. Encoding Stereoscopic Depth Features for Scene Understanding in Off-Road Environments (C.J. Holder, T.P. Breckon), In Proc. International Conference Image Analysis and Recognition, Springer, pp. 427-434, 2018. [https://doi.org/10.1007/978-3-319-93000-8_48]

[R5]. Learning to Drive: Using Visual Odometry to Bootstrap Deep Learning for Off-Road Path Prediction (C.J. Holder, T.P. Breckon), In Proc. Intelligent Vehicles Symp., IEEE, 2018. [DOI: 10.1109/IVS.2018.8500526]

[R6]. Eliminating the Dreaded Blind Spot: Adapting 3D Object Detection and Monocular Depth Estimation to 360° Panoramic Imagery (G. Payen de La Garanderie, A. Atapour-Abarghouei, T.P. Breckon), In Proc. Euro. Conf. on Computer Vision, Springer, 2018. [https://doi.org/10.1007/978-3-030-01261-8_48]

Papers have been peer reviewed as part of the publication process and show clear originality and rigour. Publication timeline: some research publications were delayed for commercial reasons.

4. Details of the impact

Background: With the global car market comprising ~80+ million vehicle sales per annum, the global autonomous vehicle development market is estimated to grow from USD54.23 billion (2019) to USD556.67 billion in 2026 (source: Allied Market Research). The UK Industrial Strategy has seen over GBP200 million invested in UK Connected Autonomous Vehicle (CAV) research to date (UK Department for Transport), including ~GBP1 million on projects involving Durham University, and rising industrial research budgets annually from major automotive manufacturers and component suppliers (2019: Renault (~EUR4 billion), Jaguar Land Rover (~GBP2 billion), ZF (~EUR2.6 billion)).

Commercial Impact: Our underpinning research has enabled and informed the development, evaluation and safety testing of CAV/ADAS sensing in the UK and beyond:

  • Renault (France) used [R1,R3] to support the development of their “Human Like Guidance” (HLG) ADAS / CAV concept whereby the driver and vehicle communicate verbally based on a shared visual understanding of the road environment (i.e. with the vehicle acting as if it were a human co-pilot / e.g. vehicle → driver: “ Turn left before the red car on the right, just after the railings”).

In collaboration with Durham, Renault used real-time object detection techniques from [R1] to “understand the performance characteristics of current state-of-the-art scene understanding techniques when applied to the extended requirements of HLG, including the ability to robustly detect a wide range of road-side and urban environment objects and their attributes.” [E1] “This informed the down-selection of both algorithms and on-vehicle sensing hardware for HLG evaluation within Renault” allowing Renault “to demonstrate the initial HLG concept via a simulation based prototype, that included a real-time scene understanding module for object detection supplied by Durham, culminating in a successful presentation / demonstration to Renault-Nissan executives (2017). This resulted in strong positive feedback on the value to the end-user and an executive directive to accelerate development of the HLG concept within Renault.” [REDACTED], Renault [E1].

Subsequently, the Durham team co-developed a real-time software component to perform extended real-time object detection and attribute estimation [R1], coupled with low-cost dense stereo based object range estimation [R3] (Figure 1, top). This was supplied to Renault, under commercial contract with Durham, and installed onto on-vehicle GPU processing hardware with an integrated all-weather stereo camera rig, GPS and inertial measurement unit (Figure 2 – bottom middle/right). This enabled Renault to “construct and demonstrate an integrated on-vehicle HLG prototype, incorporating a scene understanding module for real-time object detection and range estimation supplied as a combined hardware and software solution by Durham (2018). This resulted in on-vehicle realisation of the HLG concept within Renault” [E1].


Figure 2: Exemplar on-vehicle sensor rigs used to support data collection and research under [R1-R6] in collaboration with Machines with Vision (left top/bottom), ZF Race Engineering (top middle), Jaguar Land Rover (top right) and Renault (bottom middle/right) – on test at Durham University.

This on-vehicle HLG prototype, constructed in collaboration with Durham, enabled Renault to “perform extended proof of concept evaluation of the HLG concept using an integrated on-vehicle HLG prototype with 60 different test drivers on open roads around a common pre-defined evaluation route in Versailles. This enabled extensive on-road experimentation whereby the test drivers were guided via HLG voice navigation commands that were automatically derived from a list of visible scene objects (type, position, attributes) provided in real-time from the Durham scene understanding module (2018/19).” [E1]

From this collaboration with Durham, “The resulting experimentation and analysis of the HLG concept, enabled by our collaborative research work, has directly informed the vehicle design and development process within Renault with HLG now being considered within the production design (post-research) phase of Renault vehicles. From the HLG prototyping exercise with 60 different test drivers on public roads, Renault has identified and ranked several use cases with their associated customer value. In this way, Renault has built a road map for the integration of these use cases in accordance with their associated customer value. In addition, Renault has registered 2 patents on vocal ‘human like’ guidance sentences [patent: FR3078565A1, 2018]” in support of “autonomous driving technologies to be available in 15 Renault models by 2022 as part of our current ‘Drive the Future’ strategic plan” [E1].

  • Jaguar Land Rover’s (UK) development and evaluation of CAV in “collaboration with ... Durham [based on R2, R4, R6], … resulted in research that has informed our ability to evaluate 3D depth mapping ..., scene segmentation and ability for accurately describing scene features. This enabled us to utilise this .. in .. investigating more feature-rich ADAS (Advanced Driver Assistant Systems), and to push forward vehicle autonomy.” - [REDACTED], Jaguar Land Rover [E2]

This has enabled JLR to “increase our understanding on ... the usefulness of vision systems on vehicles ... and the impact that they can have in autonomy.” [E2] “The impact of this research … for driver assistance systems and vehicle autonomy, in both the off-road [R2, R4] and on-road [R6] environment has notably informed our internal research and development process for the range of .. options that may feature in production vehicles [patent: WO2018007079A1, 2016] ... and will have a large, positive impact ... towards autonomy.” (Fig. 2, top/right) [E2]

  • Machines with Vision’s (UK, Germany) development of a novel all-weather CAV localisation sensor in “collaboration … [with Durham] has enabled … on-road testing of early low-TRL versions of our localisation sensor [patent: US20190265038A1, 2016] ... benchmarking against visual stereo odometry” and facilitated development of “a refined higher-TRL [Technology Readiness Level] version of RoadLoc, our automotive-specific localisation sensor, in collaboration with both Durham and Jaguar Land Rover” [E3]. Based on the body of automotive research [R1-R6] underpinned by prior on-vehicle test and evaluation (see Figures 1 / 2 + [E3]), “the research and industrial experience of the Durham team within the automotive sector, has directly contributed to the successful award and … delivery of ... research contracts by Machines with Vision.” (additional income to company: GBP436,000 – [E3])

“Today, Machines with Vision… [has] a turnover of GBP547,789 (2020) and projected GBP1 million of revenue in 2021 ..., representing a significant growth from ... founding [in] 2016. … Without our research collaboration … [with] Durham we would never have had many of the insights and connections with the automotive sector, nor the ability to independently test and validate our sensor designs.” - [REDACTED] Machines with Vision [E3].

“Our ability to provide technical early-stage evidence of both on-vehicle testing and proof-of-concept benchmarking ..., in addition to the guidance on … sensor packaging / mounting / interfacing received from your team in the early days of our … localisation sensor, has been instrumental in contributing to ...” - approximately GBP600,000+ of additional company income [E3] and full commercialisation of a rail variant of the CAV localisation sensor . This will be fitted onto “all DB‎ (Deutsche Bahn, German Railways) measurement trains … by the end of 2019” [E4]. This task was completed and now provides localisation in areas of poor GPS coverage (e.g. tunnels, cuttings etc.) and improved 2cm localisation accuracy over the entire network at increased train operating speeds [E4]. “DB measurement trains” are specifically equipped track survey inspection trains that are used to routinely monitor track condition in order to ensure railway safety across the ~41,000km of track in Germany that carries ~2.6 billion passengers annually (2019, source: DB). This is in addition to ongoing work “with Network Rail (UK) on ... its measurement train (responsible for .. safety on all the UK’s high-speed lines)” that helps protect an additional ~1.7 billion passengers across ~16,000km of track in the UK.

  • ZF Race Engineering - Conekt Engineering Services (UK) collaborated with Durham to use stereo-based 3D scene mapping from [R3] to “develop and demonstrate 3D content generation from multiple cameras” and to “design and develop the 3D scene reconstruction from key-frame based photogrammetry” - [REDACTED] (ZF) [E5]. “This enabled the realisation of high resolution real-time 360° 3D information using low-cost camera sensing … for use in ... platform autonomy, ... path sensing and object avoidance” [E5] using data obtained from the multi-camera rig constructed at Durham (Figure 2, top middle). “This technical work, coupled with the 3D computer vision research expertise at Durham, contributed to the successful delivery of [2 projects to the] Defence Science and Technology Laboratory [UK Ministry of Defence]” [E5] resulting in GBP164,000 of additional income to the company (2015/2016) and contributing to growth in the company patent portfolio in this area [patent: US20190182467A1, 2017].

Summary: Research from Durham [R1-R6] has enabled research and development at leading automotive manufacturers including patentable technology (France/UK: Renault, Jaguar Land Rover, ZF), enabled commercial sensor systems to be supplied to UK MoD (ZF) and supported technology translation into rail which now helps to protect ~57,000 km of track and ~4.3 billion passenger journeys annually (UK/Germany: Machines with Vision).

5. Sources to corroborate the impact

[E1]. Testimonial – Groupe Renault, [REDACTED] February 2020.

[E2]. Testimonial – Jaguar Land Rover, [REDACTED] August 2019.

[E3]. Testimonial – Machines With Vision Ltd, [REDACTED], December 2020.

[E4]. Machines With Vision | Deutsche Bahn (DB) – website article (accessed 14th Oct. 2019).

[E5]. Testimonial – ZF Race Eng., [REDACTED] Aug. 2020.

