Abstract
Activity recognition applications are growing in importance due to two key factors: first, there is an increased need for human assistance and surveillance; and second, the increased availability of datasets and improved image recognition algorithms have enabled effective recognition of more sophisticated activities. In this paper we develop an activity recognition approach to support visually impaired people that leverages these advances. Specifically, our approach uses a dataset of videos taken from the point of view of a guide dog to train a convolutional neural network to recognize the activities taking place around the camera and provide feedback to a visually impaired human user. Our experiments show that our trained models surpass the current state of the art for identifying activities in the dog-centric activity dataset.
| Original language | English |
|---|---|
| Title of host publication | 2017 International Joint Conference on Neural Networks (IJCNN) |
| Publisher | IEEE Xplore |
| Pages | 2267-2274 |
| Number of pages | 8 |
| DOIs | |
| Publication status | Published - May 2017 |