Visual Shoreline Detection for Blind and Partially Sighted People
International Conference on Computers Helping People with Special Needs (ICCHP)
Linz, Austria, July 2018 · bib pdf slides
Existing navigation and guidance systems do not properly support special guidance aids such as the widely used white cane. We therefore propose a novel shoreline detection system that detects and tracks possible shorelines from a user's perspective in urban scenarios. Our approach uses three-dimensional scene information acquired from a stereo camera and can inform a user of available shorelines, as well as of obstacles blocking an otherwise clear shoreline path, and thus assist in shorelining. We evaluate two different algorithmic approaches on two different datasets, with promising results. We aim to improve a user's scene understanding by providing relevant scene information and to support the creation of a mental map for nearby guidance tasks. This can be especially helpful for reaching the next available shoreline in unfamiliar locations, e.g., at an intersection or a driveway. Knowledge of available shorelines can also be integrated into routing and guidance systems, and vice versa.
Mind the Gap: Virtual Shorelines for Blind and Partially Sighted People
International Conference on Computer Vision Workshop (ICCV) on Assistive Computer Vision and Robotics (ACVR)
Venice, Italy, October 2017 · bib pdf poster slides
Blind and partially sighted people have encountered numerous devices intended to improve their mobility and orientation, yet most still rely on traditional techniques such as the white cane or a guide dog. In this paper, we consider improving the orientation process itself through the creation of routes that are better suited to specific needs. More precisely, this work focuses on routing for blind and partially sighted people at a shoreline-like level of detail, modeled after real-world white cane usage. Our system creates such fine-grained routes by extracting routing features, e.g., building facades and road crossings, from openly available geolocation data. More importantly, the generated routes provide a measurable safety benefit, as they reduce the number of unmarked pedestrian crossings and favor more accessible alternatives. Our evaluation shows that such fine-grained routing can improve users' safety and their understanding of the environment ahead, especially the upcoming route and its impediments.
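The safety-aware routing can be sketched as a shortest-path search in which edges over unmarked crossings carry an extra cost (a minimal illustration with a hypothetical graph and penalty value, not the system's actual cost model):

```python
import heapq

def safest_route(graph, start, goal, crossing_penalty=50.0):
    """Dijkstra over graph: node -> [(neighbor, length_m, is_unmarked_crossing)].

    Unmarked pedestrian crossings are penalized, so the route prefers
    marked (e.g., zebra) crossings even when they require a detour."""
    dist = {start: 0.0}
    prev = {}
    queue = [(0.0, start)]
    while queue:
        d, node = heapq.heappop(queue)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        for nbr, length, unmarked in graph.get(node, []):
            cost = d + length + (crossing_penalty if unmarked else 0.0)
            if cost < dist.get(nbr, float("inf")):
                dist[nbr] = cost
                prev[nbr] = node
                heapq.heappush(queue, (cost, nbr))
    # Walk back from goal to start to recover the route.
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1]

# Hypothetical intersection: 10 m direct but unmarked crossing from A to B,
# versus a 35 m detour via C over a marked crossing.
graph = {
    "A": [("B", 10.0, True), ("C", 15.0, False)],
    "C": [("B", 20.0, False)],
    "B": [],
}
print(safest_route(graph, "A", "B"))  # takes the detour via C
```

With the penalty set to zero the direct unmarked crossing would win; the penalty encodes the safety preference as route cost.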
Using Technology Developed for Autonomous Cars to Help Navigate Blind People
International Conference on Computer Vision Workshop (ICCV) on Assistive Computer Vision and Robotics (ACVR)
Venice, Italy, October 2017 · bib pdf
Autonomous driving is currently a very active research area, with virtually all automotive manufacturers competing to bring the first autonomous car to market. This race leads to billions of dollars being invested in the development of novel sensors, processing platforms, and algorithms. In this paper, we explore the synergies between the challenges of self-driving technology and the development of navigation aids for blind people. We aim to leverage recently emerged methods for self-driving cars and use them to develop assistive technology for the visually impaired. In particular, we focus on the task of perceiving the environment in real time from cameras. First, we review current developments in embedded platforms for real-time computation as well as current algorithms for image processing, obstacle segmentation, and classification. Then, as a proof of concept, we build an obstacle avoidance system for blind people based on a hardware platform used in the automotive industry. To perceive the environment, we adapt an implementation of the stixels algorithm designed for self-driving cars. We discuss the challenges and modifications required for this transfer of application domain. Finally, to show its usability in practice, we conduct and evaluate a user study with six blindfolded people.
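The column-wise free-space idea behind stixels can be sketched in a drastically simplified form (a flat-ground disparity model and a synthetic scene; real stixel implementations use dynamic programming over a calibrated ground model, so everything below is illustrative only):

```python
import numpy as np

def first_obstacle_row(disparity, ground_slope, tol=1.0):
    """For each image column, walk up from the bottom row and return the
    first row whose disparity deviates from the flat-ground model
    d_ground(v) = ground_slope * v; -1 means the column is free space.
    A crude stand-in for the stixel free-space computation."""
    rows, cols = disparity.shape
    ground = ground_slope * np.arange(rows)  # expected disparity per row
    result = np.full(cols, -1)
    for c in range(cols):
        for v in range(rows - 1, -1, -1):  # bottom of the image upwards
            if abs(disparity[v, c] - ground[v]) > tol:
                result[c] = v  # obstacle base found in this column
                break
    return result

# Synthetic scene: flat ground everywhere, plus one fronto-parallel
# obstacle (constant disparity) occupying columns 3..5, rows 8..15.
rows, cols = 20, 8
disp = 0.5 * np.arange(rows)[:, None] * np.ones((1, cols))
disp[8:16, 3:6] = 12.0
print(first_obstacle_row(disp, ground_slope=0.5))
```

Columns covered by the obstacle report the obstacle base row; unobstructed columns report -1, i.e., free space all the way up.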
Zebra Crossing Detection from Aerial Imagery Across Countries
International Conference on Computers Helping People with Special Needs (ICCHP)
Linz, Austria, July 2016 · bib pdf slides
We propose a data-driven approach to detect zebra crossings in aerial imagery. The system automatically learns an appearance model from the geospatial data available for an examined region. HOG and LBPH features, in combination with an SVM, yield state-of-the-art detection results on different datasets. We also apply this classifier across datasets obtained from different countries, enabling detections without requiring any additional geospatial data for those specific regions. The approach is capable of searching for further, yet uncharted, zebra crossings in the data. Information gained from this work can be used to generate new zebra crossing databases or improve existing ones, which are especially useful in navigational assistance systems for visually impaired people. We show the usefulness of the proposed approach and plan to use this research as part of a larger guidance system.
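The feature side of such a pipeline can be illustrated with a minimal HOG-style descriptor (a single orientation histogram without the cell/block structure of full HOG, and without the LBPH part; in the paper these features feed an SVM, which is omitted here for brevity):

```python
import numpy as np

def orientation_histogram(patch, bins=9):
    """HOG-style feature: histogram of unsigned gradient orientations,
    weighted by gradient magnitude and normalized to sum to ~1."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)  # fold into [0, pi)
    hist, _ = np.histogram(ang, bins=bins, range=(0.0, np.pi), weights=mag)
    return hist / (hist.sum() + 1e-9)

# Synthetic aerial patches: horizontal stripes (zebra-like) vs. plain road.
stripes = np.repeat(np.tile([0.0, 1.0], 2), 4)[:, None] * np.ones((1, 16))
plain = np.full((16, 16), 0.5)

feat_s = orientation_histogram(stripes)
feat_p = orientation_histogram(plain)
# The striped patch concentrates its energy in one orientation bin
# (the stripes' edge direction); the plain patch has almost none.
print(feat_s.argmax(), feat_s.max(), feat_p.sum())
```

A linear SVM trained on such descriptors then separates striped crossing patches from background; the stripe regularity is exactly what makes the feature discriminative.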
Way to Go! Detecting Open Areas Ahead of a Walking Person
European Conference on Computer Vision Workshop (ECCV) on Assistive Computer Vision and Robotics (ACVR)
Zurich, Switzerland, September 2014 · bib pdf
We determine the region in front of a walking person that is not blocked by obstacles. This is an important task when assisting visually impaired people or navigating autonomous robots in urban environments. We use conditional random fields to learn how texture and depth information relate to accessibility. We demonstrate the effectiveness of the proposed approach on a novel dataset consisting of urban outdoor and indoor scenes recorded with a handheld stereo camera.
Cognitive Evaluation of Haptic and Audio Feedback in Short Range Navigation Tasks
International Conference on Computers Helping People with Special Needs (ICCHP)
Paris, France, July 2014 · bib pdf slides
Assistive navigation systems for the blind commonly use speech to convey directions to their users. However, this is problematic for short-range navigation systems that need to provide fine-grained but diligent guidance in order to avoid obstacles. For this task, we compared haptic and audio feedback systems under the NASA-TLX protocol to analyze the additional cognitive load they place on users. Both systems are able to guide users through a test obstacle course. However, for white cane users, auditory feedback results in a 22 times higher cognitive load than haptic feedback. This discrepancy in cognitive load was not found for blindfolded users; we therefore argue against evaluating navigation systems solely with blindfolded users.
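For reference, the weighted NASA-TLX workload score combines six subscale ratings with weights from 15 pairwise comparisons (the sketch below uses hypothetical ratings, not values from the study):

```python
def nasa_tlx(ratings, weights):
    """Weighted NASA-TLX workload score.

    ratings: subscale -> rating on the 0..100 scale.
    weights: subscale -> how often it was chosen in the 15 pairwise
             comparisons (the six weights must sum to 15).
    Returns the overall workload: sum(rating * weight) / 15."""
    scales = ["mental", "physical", "temporal",
              "performance", "effort", "frustration"]
    assert set(ratings) == set(weights) == set(scales)
    assert sum(weights.values()) == 15
    return sum(ratings[s] * weights[s] for s in scales) / 15.0

# Hypothetical ratings and comparison weights for one participant.
ratings = {"mental": 70, "physical": 30, "temporal": 55,
           "performance": 40, "effort": 60, "frustration": 65}
weights = {"mental": 4, "physical": 1, "temporal": 3,
           "performance": 2, "effort": 3, "frustration": 2}
print(nasa_tlx(ratings, weights))
```

The pairwise weighting is what lets the score emphasize the load sources that mattered to each individual participant.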
Accessible Section Detection for Visual Guidance
IEEE Workshop on Multimodal and Alternative Perception for Visually Impaired People (MAP4VIP) In Conjunction with International Conference on Multimedia and Expo (ICME)
San Jose, USA, June 2013 · bib pdf slides bvs bvs-modules
We address the problem of determining the accessible section in front of a walking person. In our definition, the accessible section is the spatial region that is not blocked by obstacles. For this purpose, we use gradients to calculate surface normals on the depth map and subsequently determine the accessible section from these surface normals. We demonstrate the effectiveness of the proposed approach on a novel, challenging dataset consisting of urban outdoor and indoor scenes recorded with a handheld stereo camera.
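The gradient idea can be sketched as a simplified per-pixel test on a synthetic depth map (assuming an ideal linear ground model, not the paper's exact surface-normal formulation):

```python
import numpy as np

def accessible_mask(depth, ground_slope, tol=0.05):
    """Approximate surface orientation from depth-map gradients and mark
    ground-like pixels. For flat ground in front of a forward-facing
    camera, depth changes roughly linearly with the image row and barely
    with the column, while a fronto-parallel obstacle has near-constant
    depth; comparing the row-wise gradient against the expected ground
    slope separates the two."""
    dz_dv, dz_du = np.gradient(depth)  # row-wise and column-wise change
    return (np.abs(dz_dv - ground_slope) < tol) & (np.abs(dz_du) < tol)

# Synthetic depth map: ground plane losing 0.1 m of depth per image row,
# plus a fronto-parallel box (constant depth) in the middle.
rows, cols = 12, 10
depth = 5.0 - 0.1 * np.arange(rows)[:, None] * np.ones((1, cols))
depth[4:8, 3:7] = 2.0
mask = accessible_mask(depth, ground_slope=-0.1)
print(mask[10, 5], mask[5, 5])  # ground pixel vs. obstacle pixel
```

Because only local gradients are needed, this test stays cheap per pixel, which is the property the approach relies on for real-time use.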
A Guidance and Obstacle Evasion Software Framework for Visually Impaired People
Information about the environment is desired in several applications, for example autonomous robots and support systems for visually impaired persons. As in most scenarios where a human being depends on a support system, reliability is of utmost importance. This creates a high demand for performance and robustness in real-world settings. Many systems created for this purpose cannot cope with constraints such as platforms with a large amount of uncontrolled ego-motion or the need for real-time processing of information, and are thus not feasible for this specific situation.
The topic of this thesis is a novel framework for creating vision-based support systems for visually impaired persons. It consists of a modular, easily extendable, and highly agile software system. Furthermore, a ground detection system is created to aid in mobile navigation scenarios. The system calculates the accessible section, relying on the assumption that the orientation of a given plane segment can be derived from a stereo camera reconstruction process.
Many frameworks have been created to simplify the development of large and complex systems and to foster collaboration among researchers. Usually, such frameworks are created for a specific purpose, for example a robotic application. In such a scenario, many elements are needed to manage the components of the robotic platform, such as motor controls. This creates dependencies on the availability of specific building blocks and induces considerable overhead if such components are not needed. The created framework therefore imposes no restrictions on its use case by moving such functionality into modular components.
In computer vision, many features and algorithms exist to detect the ground plane. Some of these are quite costly to compute, for example segmentation-based algorithms. Others use a RANSAC-based approach that struggles in situations where the ground plane accounts for only a small part of the examined input data. To alleviate these problems, a simple yet robust feature is proposed, consisting of a gradient detection in the stereo reconstruction data. The gradient of a region in the disparity map correlates directly with the orientation of the corresponding surface in the real world. Since the gradient calculation is not complex, a fast and reliable computation of the accessible section becomes possible.
To evaluate the proposed ground detection system, a dataset was created. It consists of 20 videos recorded with a handheld camera rig and contains a high degree of camera ego-motion to simulate a system worn by a pedestrian. The accessible section detection based on the gradient calculation shows promising results.
A dataset for accessible section detection and obstacle avoidance, recorded with a handheld stereo camera rig subject to strong ego-motion; 20 videos of varying length covering common urban scenes. flowerbox.zip (2 GB)
HiWi Jobs, Bachelor/Master Theses
- Navigation Systems for the Visually Impaired [pdf]
- Project Practical Course: Computer Vision for Human-Machine Interaction (SS2013, SS2014 [HP], SS2015, SS2016, SS2017, SS2018)
- Seminar: Assistive Technologies for the Visually Impaired (SS2014, SS2015, WS2015/16, WS2016/17, WS2017/18)
- Seminar Computer Vision for Human-Computer Interaction (WS2016/17, WS2017/18)
Projects and Awards
- 2016/07-2019/06 TERRAIN: “Independent Mobility of Blind and Visually Impaired People in Urban Areas through Audio-Tactile Navigation” (KIT News DE|EN)
- 2016/07 Best Praktikum Award for the “Computer Vision for Human-Computer Interaction” practical course in SS2015 (KIT News)
- 2014/02-2017/01 “AVVIS: Artificial Vision for Assisting Visually Impaired in Social Interaction”
- 2013/08 Google Research Award: “A Mobility and Navigational Aid for Visually Impaired Persons” (KIT News)
- 2018 “Development of an Assistance System for the Safe Crossing of Streets for People with Visual Impairment” (MA)
- 2018 “Depth-Based Shoreline Detection in Urban Areas for People with Visual Impairment” (MA)
- 2017 “Virtual Shoreline Generation for People with Visual Impairment in Urban Areas” (BA)
- 2016 “Zebra-Crossing Detection for the Visually Impaired” (DA)
- 2015 “Detector Evaluation for Pedestrian Crosswalk Guidance Systems” (MA)
- 2014 “Barcode Detection Using the Modified Census Transform” (SA)
- 2014 “Extension of a Computer Vision Framework for Development on Android” (SA)