Two-tiered facial verification for mobile devices

Mobile devices have become increasingly popular and affordable in recent years. As a consequence of their ubiquity, these devices now carry all sorts of personal data that should be accessed only by their owner. Even though knowledge-based procedures are still the main methods used to secure the owner's identity, biometric traits have recently been employed for more secure and effortless authentication. In this work, we propose a facial verification method optimized for the mobile environment. It consists of a two-tiered procedure that combines hand-crafted features with a new convolutional neural network, HF-CNN, an architecture tailored to mobile devices that processes encoded information from a pair of face images. We also propose a technique to adapt our method's acceptance cutoff to images whose characteristics differ from those seen during training, by using the device owner's enrollment gallery. The proposed solution outperforms state-of-the-art face verification methods, with a model that is 16 times smaller and processes an image 4 times faster on recent smartphone models. Finally, we present a new dataset of selfie pictures, the RCD selfie dataset, which we hope will support future research in this scenario.
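
Below is a minimal sketch of how such a two-tiered verification decision and gallery-based cutoff adaptation could be organized. It is an illustration under our own assumptions, not the paper's actual implementation: the function names (`handcrafted_distance`, `hf_cnn_score`, `adapt_cutoff`), the thresholds, and the descriptors are hypothetical placeholders.

```python
import numpy as np

# Hypothetical sketch of a two-tiered verification flow: a cheap hand-crafted
# tier resolves easy accept/reject cases, and the heavier CNN tier (HF-CNN in
# the paper) handles only the remaining ambiguous pairs. All names and
# thresholds below are illustrative assumptions.

def handcrafted_distance(img_a, img_b):
    """Placeholder for a fast hand-crafted descriptor comparison."""
    feat_a = np.asarray(img_a, dtype=np.float32).ravel()
    feat_b = np.asarray(img_b, dtype=np.float32).ravel()
    return float(np.linalg.norm(feat_a - feat_b))

def hf_cnn_score(img_a, img_b, model):
    """Placeholder call to a pairwise CNN returning a similarity in [0, 1]."""
    return float(model(img_a, img_b))

def verify(img_probe, img_gallery, model,
           tier1_reject=50.0, tier1_accept=5.0, cnn_cutoff=0.5):
    # Tier 1: hand-crafted features settle clear matches and non-matches.
    d = handcrafted_distance(img_probe, img_gallery)
    if d > tier1_reject:
        return False
    if d < tier1_accept:
        return True
    # Tier 2: only ambiguous pairs pay the cost of the CNN.
    return hf_cnn_score(img_probe, img_gallery, model) >= cnn_cutoff

def adapt_cutoff(enrollment_images, model, base_cutoff=0.5, margin=0.1):
    # Hypothetical cutoff adaptation using the owner's enrollment gallery:
    # score all genuine enrollment pairs and keep the acceptance cutoff
    # safely below their typical similarity.
    scores = [hf_cnn_score(a, b, model)
              for i, a in enumerate(enrollment_images)
              for b in enrollment_images[i + 1:]]
    if not scores:
        return base_cutoff
    return min(base_cutoff, float(np.mean(scores)) - margin)
```

In this sketch, the first tier keeps on-device latency low by invoking the CNN only when the cheap comparison is inconclusive, which is one plausible reading of the two-tiered design summarized above.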