Announcements
- March 5th: Deadline for paper submission has been extended to March 22nd!
- Jan 21st: Training and validation splits of the challenge are available.
- Jan 21st: DFAD2024 will be held in conjunction with CVPR 2024 - website is online!
The Workshop
Machine-generated images are becoming increasingly common in the digital world, thanks to the spread of deep learning models, such as Generative Adversarial Networks and Diffusion Models, that can generate visual data. While image generation tools can be employed for lawful goals (e.g., to assist content creators, generate simulated datasets, or enable multi-modal interactive applications), there is a growing concern that they might also be used for illegal and malicious purposes, such as the forgery of natural images, or the generation of images in support of fake news, misogyny, or revenge porn. While images generated in the past few years contained artefacts that made them easily recognizable, today's results are far harder to distinguish from real images from a purely perceptual point of view. In this context, assessing the authenticity of images becomes a fundamental goal for security and for guaranteeing a degree of trustworthiness in AI algorithms. There is a growing need, therefore, for automated methods that can assess the authenticity of images (and, in general, multimodal content) and keep pace with the constant evolution of generative models, which become more realistic over time.
The second Workshop and Challenge on DeepFake Analysis and Detection (DFAD) focuses on the development of benchmarks and tools for fake data understanding and detection, with the final goals of protecting against visual disinformation and the misuse of generated images and text, and of monitoring the progress of existing and proposed detection solutions. Moreover, given the growing number of generative models, detection methods should generalize to content produced by models unseen during training. The workshop fosters the submission of works that identify novel ways of understanding and detecting fake data, especially through new machine learning approaches capable of combining syntactic and perceptual analysis.
📃 Read the CfP and submit your paper
🚀 The Challenge
In parallel with soliciting the submission of relevant scientific works, the Workshop hosts a competition on deepfake detection. The competition is organised with the support of the ELSA project - the European Lighthouse on Secure and Safe AI - which builds on and extends the internationally recognized ELLIS (European Laboratory for Learning and Intelligent Systems) network of excellence. The objective of the challenge is to monitor and evaluate the development of algorithms for deepfake detection, in terms of both efficacy and explainability.
Submitted papers do not need to be linked to the challenge.
🏅 Get started with the challenge
Keynote Speakers
Cristian Canton Ferrer
Head of Responsible AI (RAI) at Meta
Rita Cucchiara
University of Modena and Reggio Emilia
Shiran Ganor
AI Researcher, Clarity
Organizers
Lorenzo Baraldi
University of Modena and Reggio Emilia
Alessandro Nicolosi
Leonardo SpA
Dmitry Kangin
Lancaster University
Tamar Glaser
Meta AI
Plamen Angelov
Lancaster University
Tal Hassner
Meta AI