Virtual guide dog: An application to support visually-impaired people through deep convolutional neural networks

Juarez Monteiro, João Paulo Aires, Roger Granada, Rodrigo C Barros, Felipe Meneguzzi

Research output: Chapter in Book/Report/Conference proceeding › Published conference contribution


Abstract

Activity recognition applications are growing in importance due to two key factors: first, the increased need for human assistance and surveillance; and second, the increased availability of datasets and improved image recognition algorithms, which allow effective recognition of more sophisticated activities. In this paper, we develop an activity recognition approach to support visually impaired people that leverages these advances. Specifically, our approach uses a dataset of videos taken from the point of view of a guide dog to train a convolutional neural network to recognize the activities taking place around the camera and provide feedback to a visually impaired human user. Our experiments show that our trained models surpass the current state of the art for identifying activities in the dog-centric activity dataset.
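The abstract describes the approach at a high level: fine-tune a convolutional neural network on frames from dog-centric video to classify the surrounding activity. The paper's record here does not include code, so the following is only a minimal sketch of that frame-level setup, assuming a PyTorch pipeline; the class count, the choice of ResNet-18, and the preprocessing values are illustrative assumptions, not the authors' exact configuration.

    # Hedged sketch: frame-level activity classification with a pretrained CNN.
    # NUM_ACTIVITIES and the backbone are assumptions for illustration only.
    import torch
    import torch.nn as nn
    from torchvision import models, transforms

    NUM_ACTIVITIES = 10  # assumed number of activity classes in the dog-centric dataset

    # Standard ImageNet preprocessing applied to each extracted video frame.
    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    # Fine-tune a pretrained CNN: swap the final layer for one that
    # predicts the activity classes.
    model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
    model.fc = nn.Linear(model.fc.in_features, NUM_ACTIVITIES)

    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

    def train_step(frames: torch.Tensor, labels: torch.Tensor) -> float:
        """One optimization step over a batch of preprocessed frames."""
        model.train()
        optimizer.zero_grad()
        loss = criterion(model(frames), labels)
        loss.backward()
        optimizer.step()
        return loss.item()

At inference time, per-frame predictions could be mapped to spoken feedback for the user; how the paper aggregates frames and delivers feedback is not specified in this abstract.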
Original language: English
Title of host publication: 2017 International Joint Conference on Neural Networks (IJCNN)
Publisher: IEEE Xplore
Pages: 2267-2274
Number of pages: 8
DOIs
Publication status: Published - May 2017

Bibliographical note

ACKNOWLEDGEMENT: This paper was developed in cooperation with HP Brasil Indústria e Comércio de Equipamentos Eletrônicos LTDA., using incentives of the Brazilian Informatics Law (Law nº 8.248 of 1991). The authors would also like to thank FAPERGS, CAPES, and CNPq for funding this research.
