Horus
The Horus Project is an ambitious multidisciplinary endeavor that aims to advance artificial intelligence (AI) research and digital media trust. It comprises five distinct yet interconnected research lines — robust feature learning (A), open-set recognition (B), self-supervised learning (C), multi-modality learning (D), and fusion techniques (E) — that address critical challenges in AI development, such as enhancing model generalization, enabling cross-modal understanding, and promoting self-supervised learning paradigms.

These research lines serve as the backbone for eight applications that tackle pressing challenges of the digital era: (A1) deepfake detection and (A2) general synthetic media detection, combating the rising threat of synthetic media manipulation; (A3) authorship attribution, bolstering content authenticity and integrity; (A4) phishing detection, safeguarding users against malicious online activity; (A5) fact-checking, promoting information accuracy and combating misinformation; (A6) scientific forgery detection, preserving the credibility of scholarly publications; (A7) presentation and injection attack detection, thwarting attacks on biometric systems; and (A8) AI-enabled child-pornography detection, reinforcing efforts to protect vulnerable populations.

The project represents a paradigm shift in AI research: by putting humans at the forefront, it paves the way for ethical and trustworthy applications and contributes to the betterment of society as a whole. Its expected results comprise new solutions and methodologies for digital trust and a safer digital landscape.