Student Projects

Self-supervised Domain Adaptation for Sensor Data

Type: Master UROP
Status: Available

Wearable devices, combined with Artificial Intelligence (AI) methods, can bring significant and sustainable improvements to our lives – from better patient monitoring and lower healthcare costs to enhanced sports performance and improved quality of life. Standard approaches apply Machine Learning (ML) techniques to the data captured by body-worn sensing devices, either as classical (feature-based) ML or as Deep Learning (DL) on the raw sensor data (end-to-end learning). A typical weakness of all ML-based Human Activity Recognition (HAR) systems, whether classical or DL-based, is the domain shift that can be caused, for example, by different sensor placements [1, 2].
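
As a rough illustration of the classical, feature-based pipeline mentioned above, the sketch below windows a 3-axis accelerometer stream and extracts simple per-window statistics. The sampling rate, window size, and feature set are illustrative assumptions, not part of the project specification.

```python
# Minimal sketch of a feature-based HAR pipeline (hypothetical data layout).
# Assumes a (T, 3) accelerometer stream, e.g. sampled at 50 Hz; the window
# length, step, and features are illustrative choices only.
import numpy as np

def sliding_windows(signal: np.ndarray, size: int = 100, step: int = 50) -> np.ndarray:
    """Split a (T, 3) sensor stream into overlapping (N, size, 3) windows."""
    starts = range(0, len(signal) - size + 1, step)
    return np.stack([signal[s:s + size] for s in starts])

def handcrafted_features(windows: np.ndarray) -> np.ndarray:
    """Simple per-window statistics often used in feature-based HAR."""
    feats = [
        windows.mean(axis=1),                           # mean per axis
        windows.std(axis=1),                            # variability per axis
        np.abs(np.diff(windows, axis=1)).mean(axis=1),  # mean absolute jerk per axis
    ]
    return np.concatenate(feats, axis=1)                # shape: (N, 9)

acc = np.random.randn(1000, 3)                  # placeholder for real sensor data
X = handcrafted_features(sliding_windows(acc))  # input to a classical ML classifier
```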

This project will explore personalization and domain-adaptation techniques to address important challenges in wearable computing: noisy data, limited data, and domain shifts in both the labels and the sensor data caused by inter-person subjectivity. ML processing pipelines (including deep learning techniques) will be augmented with the latest unsupervised and self-supervised learning techniques, including contrastive learning [3]. These techniques should produce more robust and data-efficient models, i.e., models that require fewer person-specific labels. Diffusion-based approaches [4, 5] could also be considered.
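
To make the contrastive idea concrete, here is a minimal sketch of a SimCLR-style NT-Xent loss applied to sensor windows, loosely in the spirit of contrastive approaches such as [3]. The `jitter` augmentation and the `encoder` are hypothetical placeholders, not the method prescribed by the project.

```python
# Hedged sketch: contrastive self-supervised pretraining on sensor windows.
# Two augmented views of each window are pulled together in embedding space,
# while views of other windows are pushed apart; no labels are required.
import torch
import torch.nn.functional as F

def jitter(x: torch.Tensor, sigma: float = 0.05) -> torch.Tensor:
    """Simple time-series augmentation: add Gaussian noise (illustrative)."""
    return x + sigma * torch.randn_like(x)

def nt_xent(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    """NT-Xent loss over a batch of paired embeddings z1, z2 of shape (B, D)."""
    z = F.normalize(torch.cat([z1, z2]), dim=1)  # (2B, D) unit vectors
    sim = z @ z.t() / tau                        # temperature-scaled cosine similarities
    sim.fill_diagonal_(float("-inf"))            # exclude each sample's self-pair
    b = z1.size(0)
    # The positive for row i is its other view: i + B for the first half, i - B after.
    targets = torch.cat([torch.arange(b, 2 * b), torch.arange(0, b)])
    return F.cross_entropy(sim, targets)

# Usage (assumed encoder maps (B, size, 3) windows to (B, D) embeddings):
# loss = nt_xent(encoder(jitter(windows)), encoder(jitter(windows)))
```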

Project tasks:
1. Overview of existing self-supervised learning approaches [6, 7, 8, 9, 10].
2. Pre-process one dataset from wearable sensing systems; example datasets cover emotion recognition, activity recognition, and energy-expenditure estimation [11, 12, 13].
3. Build baseline ML models using the dataset from step 2.
4. Develop a self-supervised ML approach and compare the self-supervised models with the baseline ML models from step 3 (a sketch of this comparison follows the list).
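
A hedged sketch of how steps 3 and 4 could be compared: a classical baseline evaluated with a leave-one-subject-out split, the setting in which cross-person domain shift typically appears. All data below are random placeholders, and the model and metric are assumptions, not project requirements.

```python
# Illustrative baseline protocol (step 3) under leave-one-subject-out evaluation.
# Features, labels, and subject ids are random stand-ins for a real dataset.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import LeaveOneGroupOut

X = np.random.randn(200, 9)               # per-window features (e.g., from step 2)
y = np.random.randint(0, 4, 200)          # activity labels
subjects = np.random.randint(0, 5, 200)   # subject id per window

scores = []
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=subjects):
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X[train_idx], y[train_idx])
    scores.append(f1_score(y[test_idx], clf.predict(X[test_idx]), average="macro"))

print(f"Baseline macro-F1 (leave-one-subject-out): {np.mean(scores):.3f}")
# A self-supervised model (step 4) would be evaluated with the same split,
# e.g., by fine-tuning the pretrained encoder or training a linear probe on it.
```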

This project is available as an MSc thesis and as a UROP internship project.

Literature:

[1] Gjoreski, M.; Gjoreski, H.; Luštrek, M.; Gams, M. How Accurately Can Your Wrist Device Recognize Daily Activities and Detect Falls? Sensors 2016, 16, 800. https://doi.org/10.3390/s16060800.
[2] Kalabakov, S.; Stankoski, S.; Kiprijanovska, I.; Andova, A.; Reščič, N.; Janko, V.; Gjoreski, M.; Gams, M.; Luštrek, M. What Actually Works for Activity Recognition in Scenarios with Significant Domain Shift: Lessons Learned from the 2019 and 2020 Sussex-Huawei Challenges. Sensors 2022, 22, 3613. https://doi.org/10.3390/s22103613.
[3] Haresamudram, H., Essa, I., & Plötz, T. (2021). Contrastive predictive coding for human activity recognition. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 5(2), 1-26.
[4] K. Rasul, C. Seward, I. Schuster, and R. Vollgraf. Autoregressive denoising diffusion models for multivariate probabilistic time series forecasting. In International Conference on Machine Learning, 2021.
[5] L. Yang, Z. Zhang, Y. Song, S. Hong, R. Xu, Y. Zhao, Y. Shao, W. Zhang, B. Cui, and M.-H. Yang. “Diffusion models: A comprehensive survey of methods and applications.” arXiv preprint arXiv:2209.00796 (2022).
[6] J.-B. Grill, F. Strub, F. Altché, C. Tallec, P. Richemond, E. Buchatskaya, C. Doersch et al. “Bootstrap your own latent: A new approach to self-supervised learning.” Advances in Neural Information Processing Systems 33 (2020): 21271-21284.
[7] D. Wei, J. J. Lim, A. Zisserman, and W. T. Freeman. “Learning and using the arrow of time.” In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8052-8060. 2018.
[8] C. Tong, J. Ge, and N.D. Lane. “Zero-Shot Learning for IMU-Based Activity Recognition Using Video Embeddings.” Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 5, no. 4 (2021): 1-23.
[9] H. Kwon, C. Tong, H. Haresamudram, Y. Gao, G. D. Abowd, N. D. Lane, and T. Ploetz. “IMUTube: Automatic extraction of virtual on-body accelerometry from video for human activity recognition.” Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 4, no. 3, 2020.
[10] Y. Jain, C. I. Tang, C. Min, F. Kawsar, and A. Mathur. “ColloSSL: Collaborative Self-Supervised Learning for Human Activity Recognition.” Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 6, no. 1 (2022): 1-28.
[11] M. Laporte, D. Gasparini, M. Gjoreski, and M. Langheinrich. “Exploring LAUREATE: the Longitudinal multimodAl stUdent expeRience datasEt for AffecT and mEmory research.” In Adjunct Proceedings of the 2022 ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp/ISWC ’22), 2022.
[12] Gashi, S., Min, C., Montanari, A., Santini, S., & Kawsar, F. (2022). A multidevice and multimodal dataset for human energy expenditure estimation using wearable devices. Scientific Data, 9(1), 1-14.
[13] Gjoreski, M., Kiprijanovska, I., Stankoski, S., Mavridou, I., Broulidakis, M. J., Gjoreski, H., & Nduka, C. (2022). Facial EMG sensing for monitoring affect using a wearable device. Scientific Reports, 12(1), 1-12.

For more information contact: Martin Gjoreski