Transportation and locomotion mode recognition from multimodal smartphone sensors is useful for providing just-in-time, context-aware assistance. However, the field is currently held back by the lack of standardized datasets, recognition tasks, and evaluation criteria. Recognition methods are typically tested on ad hoc datasets acquired for one-off recognition problems, with differing choices of sensors. This prevents a systematic comparative evaluation of methods within and across research groups. Our goal is to address these issues by: i) introducing a publicly available, large-scale dataset for transportation and locomotion mode recognition from multimodal smartphone sensors; ii) suggesting twelve reference recognition scenarios, which are a superset of the tasks identified in related work; iii) suggesting relevant combinations of sensors to use, based on energy considerations, among the accelerometer, gyroscope, magnetometer, and GPS; iv) defining precise evaluation criteria, including training and testing sets, evaluation measures, and user-independent and sensor-placement-independent evaluations. Based on this, we report a systematic study of the relevance of statistical and frequency-domain features, assessed with information-theoretic criteria, to inform the design of recognition systems. We then report the reference performance obtained on all twelve recognition scenarios using a machine-learning recognition pipeline. The extent of this analysis and the clear definition of the recognition tasks enable future researchers to evaluate their own methods in a comparable manner, thus contributing to further advances in the field. The dataset and the code are available online.
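As an illustrative sketch of the kind of information-theoretic feature-relevance analysis mentioned above, the snippet below ranks a few statistical and frequency-domain features of windowed accelerometer data by their mutual information with the transportation-mode label. This is a hypothetical example, not the paper's actual pipeline: the feature set, window length, sampling rate, and the use of scikit-learn's mutual_info_classif as the information-theoretic criterion are all assumptions.

```python
# Hypothetical sketch: rank statistical and frequency-domain features by
# mutual information with the mode label. Synthetic data stands in for
# windowed accelerometer magnitude; the real dataset and feature set differ.
import numpy as np
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(0)
fs = 100                                   # assumed sampling rate (Hz)
windows = rng.normal(size=(500, 5 * fs))   # 500 windows of 5 s each
labels = rng.integers(0, 8, size=500)      # 8 modes, e.g. still/walk/run/bike/car/bus/train/subway

def extract_features(w):
    """Per-window statistical and frequency-domain features (illustrative)."""
    spectrum = np.abs(np.fft.rfft(w))
    freqs = np.fft.rfftfreq(w.size, d=1.0 / fs)
    return np.array([
        w.mean(),                     # mean (statistical)
        w.std(),                      # standard deviation
        np.abs(w).max(),              # peak magnitude
        freqs[spectrum.argmax()],     # dominant frequency (frequency-domain)
        spectrum.sum(),               # spectral energy
    ])

X = np.vstack([extract_features(w) for w in windows])
names = ["mean", "std", "peak", "dominant_freq", "spectral_energy"]

# Information-theoretic relevance: mutual information between feature and label.
mi = mutual_info_classif(X, labels, random_state=0)
for name, score in sorted(zip(names, mi), key=lambda t: -t[1]):
    print(f"{name:16s} MI = {score:.3f}")
```

On real data, features with high mutual information would be retained as inputs to the downstream machine-learning recognition pipeline, while uninformative ones could be dropped to save computation.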
Funding
This work was supported by Huawei Technologies under the project "Activity Sensing Technologies for Mobile Users" (G2015).