Time aggregation based lossless video encoding for neuromorphic vision sensor data
- Submitting institution
- Kingston University
- Unit of assessment
- 11 - Computer Science and Informatics
- Output identifier
- 11-29-1361
- Type
- D - Journal article
- DOI
- 10.1109/JIOT.2020.3007866
- Title of journal
- IEEE Internet of Things Journal
- Article number
- -
- First page
- 596
- Volume
- 8
- Issue
- -
- ISSN
- 2327-4662
- Open access status
- Compliant
- Month of publication
- -
- Year of publication
- 2020
- URL
- -
- Supplementary information
- -
- Request cross-referral to
- -
- Output has been delayed by COVID-19
- No
- COVID-19 affected output statement
- -
- Forensic science
- No
- Criminology
- No
- Interdisciplinary
- No
- Number of additional authors
- -
- Research group(s)
- -
- Citation count
- 0
- Proposed double-weighted
- No
- Reserve for an output with double weighting
- No
- Additional information
- Dynamic Vision Sensors (DVS) have the potential to disrupt the Internet of Things by enabling the capture of a scene at a limited bit-rate. As part of the EPSRC-funded “Internet of Silicon Retinas (IoSiRe)” project, this paper introduces a novel approach to compressing DVS data: events are organised into a “pseudo-video” sequence that exposes their redundancy, which is then compressed losslessly using the principles of conventional video encoders. This research is significant because it applies to any application requiring the transmission of DVS data, e.g., IoT and autonomous driving.
- Author contribution statement
- -
- Non-English
- No
- English abstract
- -