REF 2021 impact case study

OxSight: Developing smart glasses to assist the visually impaired

1. Summary of the impact

In the UK there are 300,000 legally blind individuals, and over 40,000,000 worldwide, yet 90% of these retain some residual sight. Building on 3 years of research at the University of Oxford, Prof. Philip Torr (Department of Engineering Science) and Dr. Steve Hicks (Nuffield Department of Clinical Neurosciences) collaborated to create smart glasses: non-invasive wearable devices that use intelligent image interpretation technology to allow those who are legally blind to use their residual vision to see again. OxSight Ltd was spun out to commercialise the smart glasses; the company is currently valued at GBP40,000,000 and employs 43 staff. In 2018, the CE-certified PRISM and CRYSTAL models entered the UK, EU, Chinese, and South American markets, aimed at visually impaired people with tunnel vision. To date, [text removed for publication] units have been sold, totalling [text removed for publication]. Testimonials from users demonstrate how the glasses have positively affected their lives, restoring confidence and freedom and improving their quality of life. A third product, ONYX, is currently being trialled for central vision loss and, like the existing commercial products, is receiving positive feedback from trial participants: 100% of users reported improved facial detection and 75% could read better with the product.

Computer vision software developed in the UoA that underpins these glasses is also present in a number of Huawei’s flagship products, such as the Huawei Mate10 and the Huawei Honor V10, to enhance photo capabilities. In 2018, Huawei was the 5th most used smartphone brand, and there were over 17,000,000 Huawei Mate 10 series phones (5.9% of all Huawei products) active in the world.

2. Underpinning research

Prof. Torr’s research focuses on the mathematical theory of computer vision and artificial intelligence, and the development of algorithms and software for intelligent image interpretation. A particular emphasis at the University of Oxford has been on computer tools which allow for real-time, interactive understanding of the visual world, especially three-dimensional reconstruction and learning of scenes, segmentation of images, and object motion tracking, all of which can be implemented in mobile cameras such as those on phones, drones, intelligent glasses for the visually impaired or other robot devices.

The tools developed by Prof. Torr and his group address key problems in computer vision. Most computer vision algorithms learn from image sets that have been labelled and segmented by human observers, but human labelling is time-consuming and costly. ‘Weakly supervised learning’ removes this dependence; computer vision and graphics applications are enhanced by software which can automatically estimate the salient regions in images (the most useful and informative areas for a human viewer) without any prior assumption or knowledge of the contents of those images. The research in [ R1] (conducted in collaboration with researchers at other universities in China, the UK and the USA) implemented a regional contrast-based salient object detection algorithm capable of producing full-resolution, high-quality saliency maps which represent the image in a way that is easier for the computer to analyse. The algorithm consistently outperformed 15 other methods, even when analysing ‘noisy’ internet images where the saliency regions are not clear. This work has been used extensively by many other researchers and companies as the basis for learning weakly supervised object recognition.
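The core intuition of regional contrast-based saliency can be sketched in a few lines: a region stands out when its colour differs strongly from the other regions in the image, with larger regions contributing more to the contrast. The toy function below is an illustrative assumption, not the published algorithm in [ R1], which operates on segmented regions with colour histograms and spatial weighting:

```python
def colour_distance(c1, c2):
    """Euclidean distance between two RGB colours (values in 0-1)."""
    return sum((a - b) ** 2 for a, b in zip(c1, c2)) ** 0.5

def region_saliency(regions):
    """Score each region by its colour contrast against every other
    region, weighted by the other region's pixel count (larger regions
    contribute more). `regions` is a list of (mean_colour, pixel_count)."""
    scores = []
    for i, (colour_i, _) in enumerate(regions):
        score = sum(count_j * colour_distance(colour_i, colour_j)
                    for j, (colour_j, count_j) in enumerate(regions)
                    if j != i)
        scores.append(score)
    top = max(scores) or 1.0          # normalise to [0, 1]
    return [s / top for s in scores]

# A small bright-red region on a mostly grey background stands out.
regions = [((0.9, 0.1, 0.1), 50),    # red object
           ((0.5, 0.5, 0.5), 400),   # grey background
           ((0.55, 0.5, 0.5), 300)]  # slightly different grey
saliency = region_saliency(regions)
assert saliency[0] == max(saliency)  # the red region is most salient
```

Note how the small red region scores highest even though it covers few pixels: contrast, not size, drives the saliency map.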

The group has also worked extensively on pixel-level labelling tasks, such as semantic segmentation, where each pixel of an image is labelled with a corresponding class of what is being represented (person, table, background etc). Deep learning techniques are limited in their ability to delineate visual objects in this way, but Prof. Torr’s insight was to introduce a system which combined the strengths of two separate computer vision tools, Convolutional Neural Networks and Conditional Random Fields [ R2]. This was the first work to combine deep models with Markov random fields. When applied to the challenging Pascal VOC 2012 segmentation benchmark (a standard image dataset for building and evaluating this type of algorithm), the new model outperformed competition from tech giants such as Google and Microsoft.
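The benefit of combining the two tools can be illustrated with a toy mean-field update: per-pixel class scores (standing in for CNN output) are repeatedly mixed with messages from neighbouring pixels, so isolated misclassifications get smoothed away. This hypothetical 1-D, two-class, nearest-neighbour set-up is a sketch of the idea only; the model in [ R2] unrolls mean-field inference for a dense (fully connected) CRF as a recurrent network:

```python
import math

def softmax(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def mean_field(unary, smoothness=2.0, iterations=5):
    """`unary` holds per-pixel class score pairs (the 'CNN' output).
    Each iteration mixes in messages from the two neighbouring pixels,
    rewarding agreement with their current label distributions."""
    q = [softmax(u) for u in unary]
    for _ in range(iterations):
        new_q = []
        for i, u in enumerate(unary):
            msg = [0.0, 0.0]
            for j in (i - 1, i + 1):           # nearest neighbours
                if 0 <= j < len(unary):
                    for k in range(2):
                        msg[k] += smoothness * q[j][k]
            new_q.append(softmax([u[k] + msg[k] for k in range(2)]))
        q = new_q
    return q

# Pixel 2 weakly prefers class 1 while its neighbours strongly prefer
# class 0; a few CRF iterations smooth the noisy pixel out.
unary = [(3, 0), (3, 0), (0, 0.5), (3, 0), (3, 0)]
q = mean_field(unary)
labels = [max(range(2), key=lambda k: p[k]) for p in q]
assert labels == [0, 0, 0, 0, 0]
```

Run on raw unaries alone, pixel 2 would be mislabelled; the pairwise term is what delineates objects cleanly at pixel level.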

In [ R3], Prof. Torr and colleagues approached the problem of object tracking in video footage by equipping a basic tracking algorithm with a deep learning tool: a novel Siamese (twin) neural network that had been trained for object detection in video. Despite its simplicity, the tracker was capable of operating at frame-rates beyond real-time. This work represented a paradigm shift away from the previous traditional tracking methods to those based on deep learning and is much cited by subsequent work on tracking.
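The central operation of such a tracker can be sketched simply: embed an exemplar patch of the target and a larger search region with the same function, then cross-correlate the two embeddings; the peak response marks the target's new position. The identity "embedding" over 1-D signals below is an illustrative stand-in for the shared convolutional network in [ R3], not the published architecture:

```python
def embed(signal):
    """Stand-in for the shared (Siamese) network: a real tracker applies
    the same learned convolutional layers to both inputs."""
    return list(signal)

def cross_correlate(exemplar, search):
    """Slide the embedded exemplar over the embedded search region;
    the highest score marks the most likely target position."""
    e, s = embed(exemplar), embed(search)
    n = len(e)
    return [sum(a * b for a, b in zip(e, s[i:i + n]))
            for i in range(len(s) - n + 1)]

exemplar = [1, 2, 1]              # appearance of the tracked target
search = [0, 0, 1, 2, 1, 0, 0]    # the target sits at offset 2
scores = cross_correlate(exemplar, search)
best = max(range(len(scores)), key=scores.__getitem__)
assert best == 2
```

Because the match reduces to a single correlation over a pre-computed embedding, per-frame cost is tiny, which is why the tracker runs beyond real-time despite its simplicity.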

Prof. Torr’s expertise in the field of computer vision led in 2015 to a research collaboration with Dr. Hicks, a neuroscientist at Oxford’s Nuffield Department of Clinical Neurosciences. Dr. Hicks had built a prototype visor to investigate visual perception and depth processing in low vision, which established a proof of principle design for smart glasses for visually impaired people. The collaboration drew on Prof. Torr’s extensive research into deep learning, semantic segmentation and visual object tracking, which enabled him to envisage what was possible (and what was not), and then develop specific relevant computer vision tools. These reduce a complex visual scene to its task-relevant elements, allowing the glasses to extract meaning from the environment and enhancing the spatial awareness of their users. [ R4, R5 and R6] were the product of this joint research.

[ R4] set a new benchmark in the real-time tracking of visual objects in video by framing it as a machine learning problem and examining how visual adaptation could be performed continually. It has also influenced a large body of subsequent work. [ R5] introduces SemanticPaint, a real-time, interactive system for the geometric reconstruction, object-class segmentation and learning of 3D scenes. The user interacts physically with the real-world scene, touching objects and using voice commands to assign them appropriate labels. These labels feed into an online machine learning algorithm, which then predicts labels for previously unseen parts of the scene. [ R6] describes an augmented reality system for large-scale 3D reconstruction and recognition in outdoor scenes. As well as producing a map of the 3D environment in real-time, the system allows the user to draw with a laser pointer directly onto the reconstruction to segment the model into objects. The system then learns to segment other parts of the 3D map. This novel object labelling system can be implemented in smart glasses to help visually impaired users to navigate safely through spaces by identifying semantic classes of objects, such as the difference between the footpath and the road.
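The interactive labelling loop in [ R5] and [ R6] follows a simple pattern: the user labels a handful of scene points, and an online classifier then predicts labels for the rest of the reconstruction. The 1-nearest-neighbour rule over made-up (height, grey-level) features below is a hypothetical stand-in for the online learning used in the real systems:

```python
def predict(labelled, feature):
    """Label an unseen scene point by its nearest labelled example."""
    def dist(f1, f2):
        return sum((a - b) ** 2 for a, b in zip(f1, f2))
    nearest = min(labelled, key=lambda item: dist(item[0], feature))
    return nearest[1]

# The user touches two surfaces and names them via voice command.
# Feature vectors here are illustrative: (height_m, grey_level).
labelled = [((0.0, 0.3), "floor"), ((0.8, 0.7), "table")]

# The system then labels previously unseen parts of the scene.
assert predict(labelled, (0.05, 0.35)) == "floor"
assert predict(labelled, (0.75, 0.6)) == "table"
```

In the smart glasses setting, the same propagation step is what lets a few labelled examples generalise to semantic classes such as "footpath" versus "road".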

3. References to the research

[ R1] Cheng, M. M., Mitra, N. J., Huang, X., Torr, P. H. S., & Hu, S. M. (2015). Global contrast based salient region detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 37(3), pp. 569-582. doi: 10.1109/TPAMI.2014.2345401 (Journal article)

[ R2] Zheng, S., Jayasumana, S., Romera-Paredes, B., Vineet, V., Su, Z., Du, D., Huang, C., & Torr, P. H. S. (2015). Conditional random fields as recurrent neural networks. In Proceedings of the IEEE International Conference on Computer Vision (ICCV 2015), pp. 1529-1537. doi: 10.1109/ICCV.2015.179 (Conference item)

[ R3] Bertinetto, L., Valmadre, J., Henriques, J. F., Vedaldi, A., & Torr, P. H. S. (2016). Fully convolutional siamese networks for object tracking. In Lecture Notes in Computer Science, Vol. 9914, pp. 850-865. doi: 10.1007/978-3-319-48881-3_56 (Conference item)

[ R4] Hare, S., Golodetz, S., Saffari, A., Vineet, V., Cheng, M. M., Hicks, S., & Torr, P. H. S. (2016). Struck: Structured output tracking with kernels. IEEE Transactions on Pattern Analysis and Machine Intelligence, 38(10), pp. 2096-2109. doi: 10.1109/TPAMI.2015.2509974 (Journal article)

[ R5] Golodetz, S., Sapienza, M., Valentin, J. P. C., Vineet, V., Cheng, M. M., Prisacariu, V. A., Kähler, O., Ren, C. Y., Arnab, A., Hicks, S. L., Murray, D. W., Izadi, S., & Torr, P. H. S. (2015). SemanticPaint: interactive segmentation and learning of 3D worlds. In ACM SIGGRAPH 2015 Emerging Technologies (SIGGRAPH ’15), Article 22. doi: 10.1145/2782782.2792488 (Conference item)

[ R6] Miksik, O., Vineet, V., Lidegaard, M., Prasaath, R., Nießner, M., Golodetz, S., Hicks, S. L., Pérez, P., Izadi, S., & Torr, P. H. S. (2015). The Semantic Paintbrush: interactive 3D mapping and recognition in large outdoor spaces. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (CHI ’15), pp. 3317-3326. doi: 10.1145/2702123.2702222 (Conference item)

4. Details of the impact

Prof. Torr’s expertise in the field of computer vision and AI, described in section 2, led to his collaboration with Dr Hicks, a neuroscientist at Oxford’s Nuffield Department of Clinical Neurosciences. Their joint work [ R4, R5 and R6] led to the creation of non-invasive wearable glasses with algorithms that make sense of the environment and exploit the user’s residual vision to allow them to see again. These devices are granting users independence and freedoms previously unavailable to them. To further increase impact, OxSight Ltd. was spun out to commercialise the products PRISM and CRYSTAL (for users with tunnel vision) and the forthcoming ONYX (for users with central vision loss), and now has offices around the globe to reach more people with sight loss.

Health Impact – improving quality of life and granting more freedom to those suffering sight loss

The glasses, in their various iterations from smart glasses to the commercial products PRISM and CRYSTAL, have had a dramatic effect on improving the quality of life of the partially sighted. The earliest versions of “Smart Glasses”, in the 200-person Royal National Institute of Blind People (RNIB) trial from 2015-2016, demonstrated the positive impact on quality of life in a controlled environment and in take-home tests. The “Smart Glasses” were the result of research into low-cost, non-invasive wearable technologies using an augmented reality display system based on depth cameras and see-through displays to enhance sight for obstacle avoidance, face recognition and object recognition. 66% of the 2015 RNIB trial cohort found the Smart Glasses beneficial in a range of environments. The study found that up to 50% of legally blind people with conditions such as glaucoma and retinitis pigmentosa could get a direct and immediate benefit from vision enhancement. [ S1] Those involved in the trial gave positive testimonials:

“When having a one-to-one conversation with my partner across the dining room, I was able to see facial features, glasses and earrings for the first time in years. ... It provided vision I have not seen in years, actually recognizing facial features”.

“[the Smart Glasses] give me a lot of confidence. I can recognise faces which I couldn't before. I am able to recognise numbers, so this helps me with getting on the correct bus. Pedestrian crossings - I can see the lights change brilliant.”

“The last few weeks have changed my social life completely. I have been out more than I have in the last few years. I can actually see people’s faces, in this way it is easier to talk to them and is definitely a rewarding experience.” [ S1]

With the success of the RNIB trial, in 2016 OxSight was spun out from the University of Oxford to research and produce more advanced commercial products. The OxSight PRISM and CRYSTAL glasses are a result of this further development and offer improved image enhancement features over the earlier models; see images below:

[Embedded images: the OxSight PRISM and CRYSTAL smart glasses]

These glasses continue to give users who are registered blind the ability to see again and to gain more freedom in everyday life by increasing the user’s field of vision from 5-10 degrees to 68 degrees. [ S2] Testimonials from users demonstrate these freedoms:

“Beyond my schoolwork, I use the glasses in my free time, say if I was watching TV or playing a console. I find using them watching TV 10 times more easier than using them without watching TV because it makes me able to see the whole screen…I’m not sure how I would live without the OXSIGHT CRYSTAL in my life…it’s just freedom to be able to see properly and how wonderful it is to have my vision back”

“When I go cycling I take the glasses with me, so that if we stop to see a particular view, I can put them on…and see what my husband is seeing…a building in the distance or flowers…without CRYSTAL glasses I would have fewer opportunities in my life.”

“It’s got enormous potential because already a lot of people are finding – and people with much worse vision [than me] – that they are going from being able to see almost nothing to see really quite a lot. It’s quite life-changing in that respect.” [ S2]

In 2019 OxSight products PRISM and CRYSTAL were awarded the “Tried and Tested” certification by the RNIB – the first wearable technology to be awarded this accolade. The presence of the RNIB “Tried and Tested” logo on OxSight’s products and websites indicates to blind and partially sighted people that OxSight’s products are accessible and usable; it demonstrates OxSight’s commitment to inclusivity and enables customers to make an informed choice. The certification also validates that the glasses have been rigorously tested against RNIB guidelines. [ S3]

A third product, ONYX, is being trialled specifically for central vision loss, commonly caused by macular conditions. However, sales of the new products have been disrupted by COVID-19. Despite the disruption, the CEO of OxSight stated: “The feedback we’ve received is that ONYX is life-changing and gives people back their independence. To people with degenerative eye disease, that is invaluable.” Further to this, the CEO of the Macular Society noted: “Recently, our members have been trialling OxSight’s new smart glasses – ONYX – and we have had positive feedback about their potential to make a difference to the everyday lives of those living with sight loss.” In these macular trials, 100% of users reported that facial detection was improved, and 75% said they could read better whilst wearing them. [ S4]

Economic Impact – the creation of OxSight and product sales

The spin-out company OxSight Ltd. was created in 2016, following the success of the RNIB “Smart Glasses” trial, to further research into wearable technologies that improve people’s lives. The company is currently valued at GBP40,000,000 and has raised GBP7,200,000 in investment, with 43 staff in offices in Oxford, London, Hangzhou, and now India. By 31 July 2020, [text removed for publication] units had been sold: [text removed for publication]. Revenue from the sales of the glasses constitutes [text removed for publication]. [ S5] The first two CE-certified commercial products, PRISM and CRYSTAL, were released in 2018 into EU, Asian, and South American markets, aimed at those with tunnel vision. [ S6]

Impact on product design - algorithms in Huawei flagship smartphones commercially available

Beyond OxSight, Prof. Torr’s open source algorithms in computer vision and AI [ R1] have been implemented in Huawei’s Mate 10 flagship phone and the Honor V10 to demonstrate “AI Selfie: Brilliant Bokeh, perfect portraits” which enhances the photos taken by the cameras in these phones. The research website set up to distribute the open source code and report findings has been visited 17,923 times. [ S7] As of 2018, Prof. Torr’s algorithms have featured in more than 17,000,000 active phones from across the Huawei Mate 10 series (Lite, Mate, and Pro models, accounting for 5.9% of all Huawei smartphones sold in 2018). In 2018 Huawei was the fifth most used smartphone brand in the world, with a market share of 8.5%, totalling 288,200,000 active devices in the world and 43.6% of the Chinese smartphone market. [ S8]

5. Sources to corroborate the impact

[ S1] The summary, final report, and data analysis of the 200-person trial in collaboration with RNIB in 2015 which includes user testimonials (2016)

[ S2] OxSight glasses user testimonials from OxSight ambassadors (two YouTube videos, 2018) and independent reviewer (newspaper article, 2019) testifying to the improvements in quality of life.

[ S3] OxSight website news piece evidencing being awarded the “Tried and Tested” certificate by RNIB for OxSight products (2019)

[ S4] Optometry Today article corroborating the positive reception of the early trials of the third forthcoming ONYX product aimed at central vision loss by OxSight CEO and Macular Society CEO. (2020)

[ S5] Email correspondence with OxSight CEO confirming the details of sales and the Companies House financials (2020-03)

[ S6] News item from the OxSight website corroborating CE certification (2018)

[ S7] Website for Prof. Torr’s research evidencing Torr’s algorithms being utilised in Huawei’s phones, the open source code, and a demonstration (http://mmcheng.net/dss/), and published journal article: Hou, Q., et al. (2019). Deeply supervised salient object detection with short connections. IEEE Transactions on Pattern Analysis and Machine Intelligence, 41(4), pp. 815-828. doi: 10.1109/TPAMI.2018.2815688

[ S8] Newzoo website data corroborating the number of Huawei Mate10 devices in the market (2018), and Huawei Report “Huawei's Annual Smartphone Shipments Exceed 200 Million Units, a New All-Time High”. (2018-12)
