Student Projects

Activity recognition using ear-worn sensors and machine learning

Status: Completed September 2022
Student: Davide Casnici

Working toward context recognition for human memory augmentation systems, we developed a Human Activity Recognition (HAR) machine learning pipeline using data from earable devices. In this work, we analyse how earables can be used to detect different types of verbal (e.g. speaking) and non-verbal (e.g. head shaking) interactions, as well as other activities (e.g. standing still). We collected a dataset of ear-located IMU sensor data from 30 participants and compared classical machine learning methods with state-of-the-art deep learning models to classify the raw data into a set of predefined activities. We explored how different parameters, including sampling frequency, window size, and type of sensors, influence the models' performance. The best-performing model in the leave-one-subject-out evaluation, a spectro-temporal residual network (STResNet), achieved an F1-score of 0.69 in recognising seven different activities: nodding, speaking, eating, staying, head shaking, walking, and walking while speaking.
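The leave-one-subject-out (LOSO) evaluation described above can be sketched as follows. This is a minimal illustration, not the project's actual pipeline: the synthetic data, the hand-crafted mean/std features, the window size, and the random-forest baseline are all assumptions standing in for the real earable IMU dataset and models.

```python
# Sketch of leave-one-subject-out (LOSO) evaluation on windowed IMU data.
# All sizes, features, and the classifier are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import LeaveOneGroupOut

rng = np.random.default_rng(0)

N_SUBJECTS, N_WINDOWS, WIN_LEN, N_CHANNELS = 5, 40, 100, 6  # 6 = 3-axis acc + 3-axis gyro
N_ACTIVITIES = 7  # e.g. nodding, speaking, eating, staying, head shaking, walking, walking+speaking

# Synthetic stand-in for windowed ear-IMU data: (windows, samples, channels)
X_raw = rng.normal(size=(N_SUBJECTS * N_WINDOWS, WIN_LEN, N_CHANNELS))
y = rng.integers(0, N_ACTIVITIES, size=N_SUBJECTS * N_WINDOWS)
groups = np.repeat(np.arange(N_SUBJECTS), N_WINDOWS)  # subject id per window

# Simple per-window features: mean and standard deviation of each channel
X = np.concatenate([X_raw.mean(axis=1), X_raw.std(axis=1)], axis=1)

# LOSO: each fold trains on all subjects but one and tests on the held-out subject
scores = []
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups):
    clf = RandomForestClassifier(n_estimators=50, random_state=0)
    clf.fit(X[train_idx], y[train_idx])
    scores.append(f1_score(y[test_idx], clf.predict(X[test_idx]), average="macro"))

print(f"LOSO macro-F1 over {len(scores)} subjects: {np.mean(scores):.2f}")
```

Evaluating per held-out subject, rather than with a random train/test split, avoids leaking a participant's data into both sets and gives a more honest estimate of performance on unseen users.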

For more information contact: Matías Laporte