Student Projects

Multimodal Federated Learning for Sensor Data

Type: Bachelor / Master / UROP
Status: Available

Federated learning, especially in combination with differential privacy, is among the latest techniques for building privacy-aware machine-learning models [1]. Its primary assumption – that no raw data leaves local storage – has enabled its application in a variety of privacy-sensitive domains: mobile keyboard prediction [2], human mobility modeling based on GPS data [3], modeling from electronic health records [4], etc.
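The core idea – local training plus server-side weight averaging (FedAvg [1]) – can be illustrated with a minimal NumPy sketch. All names here (`local_update`, `fed_avg`) are illustrative, and the linear model is a stand-in for the neural networks used in practice:

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local gradient steps on a linear model.
    The raw data (X, y) never leaves the client."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # MSE gradient
        w -= lr * grad
    return w

def fed_avg(global_w, client_data, lr=0.1):
    """One FedAvg round: clients train locally; the server
    averages the returned weights, weighted by shard size."""
    updates = [local_update(global_w, X, y, lr) for X, y in client_data]
    sizes = np.array([len(y) for _, y in client_data], dtype=float)
    return np.average(updates, axis=0, weights=sizes)

# Two clients holding private shards of the same task y = 2*x
rng = np.random.default_rng(0)
clients = []
for _ in range(2):
    X = rng.normal(size=(50, 1))
    clients.append((X, 2.0 * X[:, 0]))

w = np.zeros(1)
for _ in range(20):
    w = fed_avg(w, clients)
# w converges toward the true coefficient 2.0
```

Only model weights cross the network; adding differential privacy would additionally clip and noise each client's update before averaging.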

This project will investigate single-modality vs. multi-modality federated models. This is an important question for wearable sensing systems that utilize multiple sensing devices, e.g., a smartphone and a smartwatch. Each device, and each sensor within a device, may have a different availability – data coming from the smartwatch may be unavailable at certain periods (e.g., while it is charging). To enable the collaborative learning of joint models between users with variable data/modality availability, we will investigate several multimodal schemes.
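One common way to handle variable modality availability is late fusion with per-modality encoders, where absent modalities are simply skipped at fusion time. The sketch below assumes simple linear-ReLU encoders and mean fusion; all names and shapes are illustrative, not a prescribed design:

```python
import numpy as np

def encode(x, W):
    """Per-modality encoder: linear map followed by ReLU."""
    return np.maximum(0.0, x @ W)

def late_fusion(sample, encoders, available):
    """Average embeddings of the modalities that are present;
    absent ones (e.g., a smartwatch off-wrist while charging)
    are skipped instead of being imputed."""
    embs = [encode(sample[m], encoders[m]) for m in sample if available[m]]
    if not embs:
        raise ValueError("at least one modality must be available")
    return np.mean(embs, axis=0)

rng = np.random.default_rng(1)
# Phone features are 3-dim, watch features 2-dim; shared 4-dim embedding
encoders = {"phone": rng.normal(size=(3, 4)), "watch": rng.normal(size=(2, 4))}
sample = {"phone": rng.normal(size=3), "watch": rng.normal(size=2)}

z_full = late_fusion(sample, encoders, {"phone": True, "watch": True})
z_phone_only = late_fusion(sample, encoders, {"phone": True, "watch": False})
```

Because each embedding lives in the same shared space, a client missing a modality can still train its available encoders and contribute them to the federated average.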

Project tasks:
1. Pre-process one dataset from wearable sensing systems. Example datasets include emotion recognition, activity recognition and energy expenditure estimation [6, 7, 8].
2. Build centralized multimodal and single-modal models using the dataset from step 1.
3. Build federated multimodal and single-modal models using the dataset from step 1 and compare them with the centralized models from 2.
4. Develop a novel multimodal federated learning method that considers device/sensor availability, computational cost, and model accuracy.
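A first step common to tasks 1 and 3 is turning a pooled wearable dataset into per-subject shards, so that centralized and federated models can be compared on identical splits. A minimal sketch (function name and array layout are assumptions, not from a specific dataset loader):

```python
import numpy as np

def partition_by_subject(X, y, subject_ids):
    """Split a pooled dataset into per-subject shards, mimicking the
    federated setting where each user's data stays on their device."""
    shards = {}
    for sid in np.unique(subject_ids):
        mask = subject_ids == sid
        shards[sid] = (X[mask], y[mask])
    return shards

# Six windows of features from three subjects
X = np.arange(12).reshape(6, 2)
y = np.array([0, 1, 0, 1, 0, 1])
subjects = np.array([1, 1, 2, 2, 3, 3])

shards = partition_by_subject(X, y, subjects)
# → three shards, one per subject, each with two windows
```

The centralized baseline (task 2) trains on the concatenation of all shards, while the federated runs (task 3) hand each shard to a separate simulated client.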

This project is available as an MSc thesis (all four tasks), as a BSc thesis (tasks 2 and 3 only), or as a UROP internship project (all four tasks).


[1] Konečný, Jakub, H. Brendan McMahan, Felix X. Yu, Peter Richtárik, Ananda Theertha Suresh, and Dave Bacon. “Federated learning: Strategies for improving communication efficiency.” arXiv preprint arXiv:1610.05492 (2016).
[2] Hard, Andrew, Kanishka Rao, Rajiv Mathews, Françoise Beaufays, Sean Augenstein, Hubert Eichner, Chloé Kiddon, and Daniel Ramage. "Federated learning for mobile keyboard prediction." arXiv preprint arXiv:1811.03604 (2018).
[3] Ezequiel, C. E. J., Gjoreski, M., & Langheinrich, M. (2022). Federated Learning for Privacy-Aware Human Mobility Modeling. Frontiers in Artificial Intelligence, 5, 867046.
[4] Brisimi, T. S., Chen, R., Mela, T., Olshevsky, A., Paschalidis, I. C., & Shi, W. (2018). Federated learning of predictive models from federated electronic health records. International journal of medical informatics, 112, 59-67.
[5] Jiang, J. C., Kantarci, B., Oktug, S., & Soyata, T. (2020). Federated learning in smart city sensing: Challenges and opportunities. Sensors, 20(21), 6230.
[6] Laporte, M., Gasparini, D., Gjoreski, M., & Langheinrich, M. (2022). Exploring LAUREATE – the Longitudinal multimodAl stUdent expeRience datasEt for AffecT and mEmory research. In Adjunct Proceedings of the 2022 ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp/ISWC '22 Adjunct).
[7] Gashi, S., Min, C., Montanari, A., Santini, S., & Kawsar, F. (2022). A multidevice and multimodal dataset for human energy expenditure estimation using wearable devices. Scientific Data, 9(1), 1-14.
[8] Gjoreski, M., Kiprijanovska, I., Stankoski, S., Mavridou, I., Broulidakis, M. J., Gjoreski, H., & Nduka, C. (2022). Facial EMG sensing for monitoring affect using a wearable device. Scientific reports, 12(1), 1-12.

For more information contact: Martin Gjoreski