Deep learning models for human mobility modeling | People-Centered Computing @ USI Lugano

Student Projects

Deep learning models for human mobility modeling

Type: Bachelor / Master / UROP
Status: Completed June 2021
Student: Roland Holenstein

Human mobility modeling is widely recognized as key to providing new services and solutions in many application domains: tracking viral disease dynamics (e.g., the COVID-19 pandemic), recognizing behavioral changes that may signal impending mental health episodes, delivering more effective advertising and retail experiences, enhancing security, and shaping the provision of urban services.

In the context of a BSc project, you will implement an existing state-of-the-art method for human mobility modeling (e.g., [5], [6]) and apply it to a new dataset (e.g., [1] or [2]). You will also analyze the behavior of the method across a variety of parameters (e.g., dataset size, model size). Finally, you will report on the results and propose possible improvements or changes to the method.
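When implementing and evaluating a state-of-the-art model, a simple baseline is useful for putting its results in context. The sketch below is an illustrative assumption of this page (not one of the referenced methods): a first-order Markov predictor that treats a trajectory as a sequence of discrete location IDs and predicts the most frequently observed successor of the current location.

```python
from collections import defaultdict, Counter

def train_markov(trajectories):
    """Count location-to-location transitions across all trajectories."""
    transitions = defaultdict(Counter)
    for traj in trajectories:
        for current, nxt in zip(traj, traj[1:]):
            transitions[current][nxt] += 1
    return transitions

def predict_next(transitions, current):
    """Return the most frequently observed successor of `current`, or None."""
    successors = transitions.get(current)
    if not successors:
        return None
    return successors.most_common(1)[0][0]

# Toy example: each trajectory is one user's sequence of visited location IDs.
trajs = [["home", "work", "gym", "home"],
         ["home", "work", "home"],
         ["home", "cafe", "work", "gym", "home"]]
model = train_markov(trajs)
print(predict_next(model, "work"))  # -> "gym" (seen twice vs. "home" once)
```

A deep model such as DeepMove [5] should clearly outperform this kind of frequency-based baseline; if it does not, that is itself a finding worth reporting.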

In the context of an MSc thesis or UROP project, you will: (i) research datasets and machine learning methods for human mobility modeling (e.g., [1][2][3][4][5][6]); (ii) pre-process/normalize several mobility datasets to a common format (e.g., [1][2]); (iii) implement advanced deep-learning methods for human mobility modeling (e.g., [5], [6]) and compare their results on at least two datasets; (iv) summarize the results and propose future work.

References:

  1. Mokhtar, Sonia Ben, Antoine Boutet, Louafi Bouzouina, Patrick Bonnel, Olivier Brette, Lionel Brunie, Mathieu Cunche et al. “PRIVA’MOV: Analysing Human Mobility Through Multi-Sensor Datasets.” 2017 [https://hal.inria.fr/hal-01578557/document]
  2. Moro, Arielle, Vaibhav Kulkarni, Pierre-Adrien Ghiringhelli, Bertil Chapuis, and Benoit Garbinato. “Breadcrumbs: A Feature Rich Mobility Dataset with Point of Interest Annotation.” arXiv preprint arXiv:1906.12322 (2019) [https://arxiv.org/pdf/1906.12322.pdf]
  3. Luca, Massimiliano, Gianni Barlacchi, Bruno Lepri, and Luca Pappalardo. “Deep Learning for Human Mobility: a Survey on Data and Models.” arXiv preprint arXiv:2012.02825 (2020). [https://arxiv.org/pdf/2012.02825.pdf]
  4. Feng, Jie, Zeyu Yang, Fengli Xu, Haisu Yu, Mudan Wang, and Yong Li. “Learning to Simulate Human Mobility.” In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 3426-3433. 2020. [https://www.youtube.com/watch?v=sj4UCW0P6Ks&ab_channel=AssociationforComputingMachinery%28ACM%29]
  5. Feng, Jie, Yong Li, Chao Zhang, Funing Sun, Fanchao Meng, Ang Guo, and Depeng Jin. “DeepMove: Predicting Human Mobility with Attentional Recurrent Networks.” In Proceedings of the 2018 World Wide Web Conference, pp. 1459-1468. 2018. [https://github.com/vonfeng/DeepMove]
  6. Yu, Lantao, Weinan Zhang, Jun Wang, and Yong Yu. “SeqGAN: Sequence Generative Adversarial Nets with Policy Gradient.” In Thirty-First AAAI Conference on Artificial Intelligence. 2017. [https://github.com/LantaoYu/SeqGAN]
  7. scikit-mobility: mobility analysis in Python
  8. Brown, Tom B., Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan et al. “Language models are few-shot learners.” arXiv preprint arXiv:2005.14165 (2020). [https://arxiv.org/abs/2005.14165]
  9. Vaswani, Ashish, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. “Attention is all you need.” In Advances in neural information processing systems, pp. 5998-6008. 2017. [https://arxiv.org/abs/1706.03762]
  10. Devlin, Jacob, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. “BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding.” arXiv preprint arXiv:1810.04805 (2018). [https://arxiv.org/pdf/1810.04805.pdf]
  11. “Open Sourcing BERT: State-of-the-Art Pre-training for Natural Language Processing”. Google AI Blog. Retrieved 2019-11-27.
  12. https://www.tensorflow.org/tutorials/text/transformer
  13. https://keras.io/examples/nlp/text_classification_with_transformer/
  14. https://ai.googleblog.com/2017/04/federated-learning-collaborative.html
  15. https://heartbeat.fritz.ai/stylegans-use-machine-learning-to-generate-and-customize-realistic-images-c943388dc672

For more information contact: Martin Gjoreski