Unsupervised and Self-Adaptive Techniques for Cross-Domain Person Re-Identification

Person Re-Identification (ReID) across non-overlapping cameras is a challenging task, and most prior work relies on supervised feature learning from a labeled dataset to match the same person across different views. However, labeling the acquired data is time-consuming, which prevents fast deployment in forensic scenarios. Unsupervised Domain Adaptation (UDA) emerges as a promising alternative, as it adapts the features of a model trained on a source domain to a target domain without identity-label annotation. However, most UDA-based methods rely on a complex loss function with several hyper-parameters, which hinders generalization to different scenarios. Moreover, since UDA depends on the translation between domains, it is crucial to select the most reliable data from the unseen domain, avoiding error propagation caused by noisy examples in the target data, a problem that is often overlooked. To address this, we propose a novel UDA-based ReID method that optimizes a simple loss function with a single hyper-parameter and leverages triplets of samples created by a new offline strategy based on the diversity of cameras within a cluster. This strategy both adapts and regularizes the model, avoiding overfitting to the target domain. We also introduce a new self-ensembling approach that aggregates weights from different iterations into a final model, combining knowledge from distinct moments of the adaptation. For evaluation, we consider three well-known deep learning architectures and combine them for the final decision. The proposed method uses neither person re-ranking nor any identity label on the target domain, and it outperforms state-of-the-art techniques, with a much simpler setup, on the Market1501 to DukeMTMC, the challenging Market1501 to MSMT17, and the DukeMTMC to MSMT17 adaptation scenarios.
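
To make the camera-diversity idea concrete, the following is a minimal sketch of how such an offline triplet-construction step could look in Python. The function name `build_camera_diverse_triplets`, its parameters, and the random sampling policy are illustrative assumptions rather than the authors' implementation; the sketch only assumes that clustering has already assigned a pseudo-label and a camera ID to each target-domain sample.

```python
import random
from collections import defaultdict


def build_camera_diverse_triplets(sample_ids, pseudo_labels, cam_ids,
                                  n_per_anchor=1):
    """Hypothetical offline triplet construction: positives come from the
    same cluster but a different camera than the anchor, encouraging
    camera-invariant features; negatives come from other clusters."""
    clusters = defaultdict(list)
    for sid, label in zip(sample_ids, pseudo_labels):
        clusters[label].append(sid)
    cam_of = dict(zip(sample_ids, cam_ids))

    triplets = []
    for label, members in clusters.items():
        # Candidate negatives: every sample assigned to a different cluster.
        negatives = [s for l, ms in clusters.items() if l != label for s in ms]
        for anchor in members:
            # Prefer positives captured by a camera other than the anchor's.
            positives = [m for m in members if cam_of[m] != cam_of[anchor]]
            if not positives or not negatives:
                continue  # Skip clusters without camera diversity.
            for _ in range(n_per_anchor):
                triplets.append(
                    (anchor, random.choice(positives), random.choice(negatives))
                )
    return triplets
```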
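
Similarly, the self-ensembling step can be pictured as fusing model weights saved at different adaptation iterations. The sketch below assumes equal-weight averaging over PyTorch checkpoints; the paper's actual aggregation rule and checkpoint selection may differ.

```python
import copy
import torch


def aggregate_checkpoints(model, checkpoint_paths):
    """Fuse checkpoints from distinct adaptation iterations into one model
    by averaging their parameters (a simple self-ensembling assumption)."""
    state_dicts = [torch.load(p, map_location="cpu") for p in checkpoint_paths]
    avg_state = copy.deepcopy(state_dicts[0])
    for key in avg_state:
        if avg_state[key].dtype.is_floating_point:
            # Average the same tensor across all checkpoints.
            avg_state[key] = torch.stack(
                [sd[key] for sd in state_dicts]
            ).mean(dim=0)
        # Non-float buffers (e.g., BatchNorm's num_batches_tracked)
        # are kept from the first checkpoint as-is.
    model.load_state_dict(avg_state)
    return model
```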