Deep convolutional neural networks are powerful image and signal classifiers. One hypothesis is that kernels in the convolutional layers act as feature extractors, progressively highlighting more domain-specific features in the upper layers of the network. Lower-level features might therefore be suitable for transfer. We analyse this in wearable activity recognition by reusing kernels learned in a source domain in a different target domain. We consider transfer between users, application domains, sensor modalities and sensor locations. We characterise the trade-offs of transferring various convolutional layers in terms of model size, learning speed, recognition performance and training data requirements. Through a novel kernel visualisation technique and comparative evaluations, we identify what learned kernels are predominantly sensitive to, amongst sensor characteristics, motion dynamics and on-body placement. We demonstrate a ~17% decrease in training time at equal performance thanks to kernel transfer, and we derive recommendations on when transfer is most suitable.
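The kernel-transfer setup described above can be sketched as follows. This is a minimal illustration only: the layer names, kernel shapes and freezing mechanism are assumptions for the sketch, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_model(n_layers=4, n_kernels=8, kernel_len=5):
    """Random 1D conv kernels, one array per layer.
    (Wearable sensor signals are 1D time series; shapes are illustrative.)"""
    return {f"conv{i}": rng.standard_normal((n_kernels, kernel_len))
            for i in range(n_layers)}

def transfer_kernels(source, target, n_transfer):
    """Copy the first n_transfer (lower) conv layers from the source-domain
    model into the target-domain model, and mark them as frozen so that
    only the upper, domain-specific layers are trained on the target data."""
    frozen = set()
    for i in range(n_transfer):
        name = f"conv{i}"
        target[name] = source[name].copy()
        frozen.add(name)
    return target, frozen

source = init_model()   # trained on the source domain (stand-in: random)
target = init_model()   # freshly initialised for the target domain

# Transfer the two lowest convolutional layers.
target, frozen = transfer_kernels(source, target, n_transfer=2)
```

In a real framework the same idea amounts to copying the lower convolutional weights and disabling their gradient updates, then fine-tuning the remaining layers on the target domain.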
Funding
Is deep learning useful for wearable activity recognition?; G1460; GOOGLE