Deep Neural Sensing for Mobile Health and Safety

Breathing biomarkers, such as breathing rate, fractional inspiratory time, and inhalation-exhalation ratio, are vital for monitoring a user’s health and well-being. We assess the potential of smartphone acoustic sensors for passive, unguided breathing-phase monitoring in a natural environment. To address the annotation challenge, we develop a novel variant of teacher-student deep neural network training that transfers knowledge from an inertial sensor to an acoustic sensor, fusing signal processing with deep learning to eliminate the need for manual annotation of breathing sounds.
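
To make the idea concrete, here is a minimal sketch of cross-modal teacher-student training, assuming a chest-motion accelerometer channel as the teacher and log-mel spectrogram windows as the student input. All filter bands, window sizes, and network shapes below are illustrative assumptions, not the system’s actual pipeline.

```python
# Cross-modal teacher-student sketch (hypothetical shapes and parameters):
# an inertial "teacher" produces breathing-phase pseudo-labels via signal
# processing, and an acoustic "student" network is trained on them.
import numpy as np
import torch
import torch.nn as nn
from scipy.signal import butter, filtfilt, find_peaks

def teacher_labels(accel, fs=50):
    """Derive per-sample pseudo-labels (0 = inhale, 1 = exhale) from one
    accelerometer axis by band-pass filtering and peak picking."""
    b, a = butter(2, [0.1, 0.5], btype="band", fs=fs)  # typical breathing band
    x = filtfilt(b, a, accel)
    peaks, _ = find_peaks(x, distance=fs)      # inhalation maxima
    troughs, _ = find_peaks(-x, distance=fs)   # exhalation minima
    labels = np.zeros(len(x), dtype=np.int64)
    for p in peaks:                            # exhale spans peak -> next trough
        later = troughs[troughs > p]
        if len(later):
            labels[p:later[0]] = 1
    return labels

class StudentNet(nn.Module):
    """Small CNN that classifies breathing phase from a log-mel window."""
    def __init__(self, n_mels=40, n_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_mels, 32, 5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 32, 5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(32, n_classes),
        )

    def forward(self, x):          # x: (batch, n_mels, frames)
        return self.net(x)

model = StudentNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Stand-in batch: in practice each acoustic window is paired with the
# majority pseudo-label that teacher_labels() assigns to the same interval.
mel = torch.randn(8, 40, 100)
y = torch.randint(0, 2, (8,))
opt.zero_grad()
loss = loss_fn(model(mel), y)
loss.backward()
opt.step()
```

The appeal of this setup is that the inertial teacher is needed only while building the training set; at inference time the phone runs the acoustic student alone.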

We propose PAWS/SEUS, a wearable system that uses multi-channel audio sensors embedded in a headset to detect and localize cars from their honks, engine noise, and tire noise, and to warn pedestrians of the imminent danger of approaching cars.
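
As one way to see how a multi-microphone headset can localize a car, the sketch below estimates the bearing of a sound source from the inter-microphone time difference using the classic GCC-PHAT method on a two-microphone pair. The sample rate, microphone spacing, and signals are placeholder assumptions; PAWS’s actual multi-channel pipeline and learned detectors are not reproduced here.

```python
# GCC-PHAT bearing estimate from a two-microphone pair (illustrative only).
import numpy as np

def gcc_phat(sig, ref, fs, max_tau=None, interp=16):
    """Return the time delay (seconds) of `sig` relative to `ref`."""
    n = sig.shape[0] + ref.shape[0]
    R = np.fft.rfft(sig, n=n) * np.conj(np.fft.rfft(ref, n=n))
    R /= np.abs(R) + 1e-12                     # PHAT weighting
    cc = np.fft.irfft(R, n=interp * n)
    max_shift = interp * n // 2
    if max_tau is not None:
        max_shift = min(int(interp * fs * max_tau), max_shift)
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    return (np.argmax(np.abs(cc)) - max_shift) / float(interp * fs)

fs, d, c = 48000, 0.15, 343.0   # sample rate (Hz), mic spacing (m), speed of sound (m/s)
left = np.random.randn(fs)      # stand-in for a honk/engine recording
right = np.roll(left, 12)       # simulate a 12-sample inter-mic delay
tau = gcc_phat(right, left, fs, max_tau=d / c)
angle = np.degrees(np.arcsin(np.clip(tau * c / d, -1.0, 1.0)))
print(f"estimated bearing: {angle:.1f} degrees")  # ~35 degrees for this delay
```

With additional microphone pairs, several pairwise delays can be combined into a full direction-of-arrival estimate.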

We study the overhearing problem of continuous acoustic sensing devices, such as Amazon Echo and Google Home, and develop a smart cover that mitigates the leakage of personal or contextual information caused by unwanted sound sources in the acoustic environment.
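
A common technique for this kind of filtering is supervised nonnegative matrix factorization (NMF): learn spectral bases for the primary sound source and for background interference, then mask a mixture so that only primary-source energy passes through. The sketch below is a minimal, assumption-laden illustration of that idea with placeholder spectrograms, not SoundSifter’s exact algorithm.

```python
# Supervised-NMF source filtering sketch (placeholder data and parameters).
import numpy as np
from sklearn.decomposition import NMF

def learn_basis(mag_spec, n_components=16):
    """Learn a nonnegative spectral basis W for one source class.
    `mag_spec` is a magnitude spectrogram, shape (freq_bins, frames)."""
    model = NMF(n_components=n_components, init="nndsvda", max_iter=400)
    model.fit(mag_spec.T)           # sklearn expects (samples, features)
    return model.components_.T      # (freq_bins, n_components)

def filter_mixture(mix_mag, W_primary, W_other, n_iter=200):
    """Decompose a mixture on the joint basis with W held fixed, then keep
    only the part explained by the primary-source components (soft mask)."""
    W = np.hstack([W_primary, W_other])
    H = np.random.rand(W.shape[1], mix_mag.shape[1])
    for _ in range(n_iter):          # multiplicative updates for H
        H *= (W.T @ mix_mag) / (W.T @ W @ H + 1e-9)
    k = W_primary.shape[1]
    primary = W_primary @ H[:k]
    mask = primary / (W @ H + 1e-9)  # Wiener-style time-frequency mask
    return mask * mix_mag

# Stand-ins for |STFT| magnitudes of training clips and a live mixture.
primary_train = np.random.rand(257, 500)   # e.g., the owner's voice commands
other_train = np.random.rand(257, 500)     # e.g., background conversations
mixture = np.random.rand(257, 100)
W_p = learn_basis(primary_train)
W_o = learn_basis(other_train)
clean_mag = filter_mixture(mixture, W_p, W_o)
```

The filtered magnitude can then be recombined with the mixture’s phase and inverted back to audio before the signal ever reaches the listening device.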

Publications

BreathTrack: Detecting Regular Breathing Phases from Unannotated Acoustic Data Captured by a Smartphone, IMWUT/UbiComp ’21

PAWS: A Wearable Acoustic System for Pedestrian Safety, IoTDI ’18

Improving Pedestrian Safety in Cities Using Intelligent Wearable Systems, IoTJ ’19

A Smartphone-Based System for Improving Pedestrian Safety, VNC ’18

SoundSifter: Mitigating Overhearing of Continuous Listening Devices, MobiSys ’17

Demo Abstract: SEUS: A Wearable Multi-Channel Acoustic Headset Platform to Improve Pedestrian Safety, SenSys ’17