Stella Yu : Publications / Google Scholar

BatVision: Learning to See 3D Spatial Layout with Two Ears
Jesper Haahr Christensen and Sascha Hornauer and Stella X. Yu
IEEE International Conference on Robotics and Automation, Paris, France, 31 May - 4 June 2020
Paper | Slides | Code

Abstract

Many species have evolved advanced non-visual perception, while artificial systems lag behind. Radar and ultrasound complement camera-based vision, but they are often too costly and complex to set up for the limited information they provide. In nature, sound is used effectively by bats, dolphins, whales, and humans for navigation and communication. However, it is unclear how to best harness sound for machine perception.

Inspired by bats' echolocation mechanism, we design a low-cost BatVision system that can see the 3D spatial layout of the space ahead just by listening with two ears. Our system emits short chirps from a speaker and records the returning echoes through microphones fitted with a pair of artificial human pinnae. During training, we additionally use a stereo camera to capture color images from which we compute scene depth. We train a model to predict depth maps, and even grayscale images, from the sound alone. During testing, our trained BatVision provides surprisingly good predictions of 2D visual scenes from two 1D audio signals. Such a sound-to-vision system would benefit robot navigation and machine vision, especially in low-light or no-light conditions. Our code and data are publicly available.


Keywords
binaural echolocation, sound-to-vision, depth reconstruction