Improving Synthetically Generated Image Detection in Cross-Concept Settings

Achieving generalisation is one of the fundamental challenges in machine learning and deep learning. To contribute to the investigation of this challenge, the MeVer group studied the detection of synthetic images across different concept classes, i.e. synthetic image detection in a cross-concept scenario. This scenario arises, for example, when a detector is trained on synthetic human faces and then tested on synthetic animal images. Our study highlighted the ineffectiveness of existing approaches that randomly sample generated images to train their models (see image below).

Ineffective method for synthetic image detection. Image: MeVer Group

To address this, we proposed an approach based on the premise that a detector's robustness can be enhanced by training it on realistic synthetic images, selected according to the quality scores assigned by a probabilistic quality estimation model. Specifically, synthetically generated images are scored by a quality assessment model trained on real images. The top-k generated images are then selected and provided, along with real images, as input to a deep learning model that is trained to discriminate between real and fake images. An overview of the approach is presented in the following image.

The MeVer approach for synthetic image detection. Image: MeVer Group
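The selection step described above can be sketched in a few lines of Python. This is a minimal illustration, not the authors' actual code: the function names and the stand-in quality score are hypothetical, and in the real pipeline the score would come from a quality assessment model trained on real images.

```python
# Hedged sketch of quality-based top-k selection of synthetic training images.
# All names are illustrative; `quality_score` stands in for the probabilistic
# quality estimation model described in the text.

def select_top_k(synthetic_images, quality_score, k):
    """Rank synthetic images by their quality score and keep the k best."""
    ranked = sorted(synthetic_images, key=quality_score, reverse=True)
    return ranked[:k]

def build_training_set(real_images, synthetic_images, quality_score, k):
    """Combine real images (label 1) with the k most realistic synthetic
    images (label 0) to train a binary real-vs-fake detector."""
    selected = select_top_k(synthetic_images, quality_score, k)
    return [(img, 1) for img in real_images] + [(img, 0) for img in selected]

# Toy usage: each fake carries a precomputed score "q" standing in for the
# output of the quality model.
fakes = [{"id": i, "q": q} for i, q in enumerate([0.2, 0.9, 0.5, 0.7])]
reals = [{"id": "r0"}, {"id": "r1"}]
train = build_training_set(reals, fakes, lambda im: im["q"], k=2)
# Only the two highest-quality fakes (q = 0.9 and 0.7) enter the training set.
```

The design choice this sketch mirrors is that random sampling of generated images is replaced by a deterministic ranking, so the detector only ever sees the most realistic fakes during training.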

The outcome of this research revealed an improvement in generalisation performance when detectors are trained with higher-quality images.

Although our results are promising and have already been integrated into a synthetic image detection service that is available to vera.ai partners, there is still considerable room for improvement in the development of detectors that can generalise better across different concept classes.

The work of ‘Improving Synthetically Generated Image Detection in Cross-Concept Settings’ was presented at the MAD23 Workshop co-located with ICMR 2023 in Thessaloniki.

For more details on this work you can refer to the paper available in the proceedings of the 2nd ACM International Workshop on Multimedia AI against Disinformation, June 2023.


Authors: Olga Papadopoulou, Akis Papadopoulos (CERTH-ITI)

Editor: Anna Schild (DW)

vera.ai is co-funded by the European Commission under grant agreement ID 101070093, and the UK and Swiss authorities. This website reflects the views of the vera.ai consortium and respective contributors. The EU cannot be held responsible for any use which may be made of the information contained herein.