Visual navigation using a webcam based on semantic segmentation for indoor robots

Miho Adachi, Sara Shatari, Ryusuke Miyamoto

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Realizing a mobile robot that can work autonomously in a real environment has become an important goal. Dense three-dimensional (3D) maps built with 3D depth sensors, such as light detection and ranging (LiDAR), are popular in research on autonomous mobile robots. However, this approach has two drawbacks: 3D sensing devices are expensive, and localization is not robust in practical scenarios with many movable obstacles. To address these problems, this paper proposes a vision-based navigation scheme that enables autonomous movement in indoor scenes using only a webcam as an external sensor. Experimental results obtained in a university building demonstrate that a robot can move around on a floor.
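The paper itself is not reproduced here, but the general idea of steering a robot from a per-pixel segmentation of the traversable floor can be sketched as follows. This is an illustrative assumption, not the authors' algorithm: the mask layout, the `FLOOR` class id, the lower-half crop, and the centroid-based steering rule are all hypothetical choices.

```python
# Illustrative sketch (NOT the method from the paper): derive a coarse
# steering command from a semantic-segmentation mask by locating the
# horizontal centroid of "floor" pixels relative to the image centre.
FLOOR = 1  # assumed class id for the traversable floor class

def steering_from_mask(mask, deadband=0.1):
    """mask: 2D list of integer class ids (rows of equal length).

    Looks only at the lower half of the image (the region closest to
    the robot) and returns 'left', 'right', 'forward', or 'stop'.
    """
    h = len(mask)
    w = len(mask[0])
    # Column indices of all floor pixels in the lower half of the mask.
    cols = [x for row in mask[h // 2:] for x, c in enumerate(row) if c == FLOOR]
    if not cols:
        return "stop"  # no traversable region visible ahead
    # Normalized horizontal offset of the floor centroid: -0.5 .. +0.5.
    offset = (sum(cols) / len(cols)) / (w - 1) - 0.5
    if offset < -deadband:
        return "left"
    if offset > deadband:
        return "right"
    return "forward"
```

For example, a mask whose floor pixels lie only in the left columns of the lower half yields `"left"`, while a floor region centred under the robot yields `"forward"`. A real system would map the offset to a continuous angular velocity rather than three discrete commands.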

Original language: English
Title of host publication: Proceedings - 15th International Conference on Signal Image Technology and Internet Based Systems, SITIS 2019
Editors: Kokou Yetongnon, Albert Dipanda, Gabriella Sanniti di Baja, Luigi Gallo, Richard Chbeir
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 15-21
Number of pages: 7
ISBN (Electronic): 9781728156866
DOI: 10.1109/SITIS.2019.00015
Publication status: Published - Nov 2019
Event: 15th International Conference on Signal Image Technology and Internet Based Systems, SITIS 2019 - Sorrento, Italy
Duration: 26 Nov 2019 - 29 Nov 2019

Publication series

Name: Proceedings - 15th International Conference on Signal Image Technology and Internet Based Systems, SITIS 2019

Conference

Conference: 15th International Conference on Signal Image Technology and Internet Based Systems, SITIS 2019
Country: Italy
City: Sorrento
Period: 26/11/19 - 29/11/19

Keywords

  • Obstacle avoidance
  • Road following
  • Semantic segmentation
  • Visual navigation


  • Cite this

    Adachi, M., Shatari, S., & Miyamoto, R. (2019). Visual navigation using a webcam based on semantic segmentation for indoor robots. In K. Yetongnon, A. Dipanda, G. Sanniti di Baja, L. Gallo, & R. Chbeir (Eds.), Proceedings - 15th International Conference on Signal Image Technology and Internet Based Systems, SITIS 2019 (pp. 15-21). [9067961] (Proceedings - 15th International Conference on Signal Image Technology and Internet Based Systems, SITIS 2019). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/SITIS.2019.00015