Meet us in Nuremberg at the IEEE WIFS 2023 Conference!

The WIFS conference is the major scientific event organized by the IEEE Information Forensics and Security Technical Committee. The 2023 edition of the conference takes place from 4 to 7 December in the German city of Nuremberg.

Marina Gardella (Ecole Normale Supérieure Paris-Saclay, France) and Artem Yaroshchuk, Milica Gerhardt and Luca Cuccovillo (Fraunhofer Institute for Digital Media Technology IDMT, Germany) of the VERA.AI team will present several works, which will also become part of the core VERA.AI technology available for our community of test users in the months ahead.

A first work, titled “An Open Dataset of Synthetic Speech Detection”, will be presented by Artem Yaroshchuk on 5 December 2023. It describes a new speech dataset containing both synthetic and natural speech in three languages – English, Spanish, and German – designed to be easy to extend and to comply with European data protection regulations. In VERA.AI, this dataset will be used to support multi-language research on synthetic audio detection, and to establish collaborations with new researchers fighting disinformation in the EU.

Milica Gerhardt will present a second work, entitled “Advancing Audio Phylogeny: A Neural Network Approach for Transformation Detection”, also on 5 December. It proposes a novel approach to audio phylogeny, i.e. the detection of relationships and transformations within a set of near-duplicate audio items. In VERA.AI, this method will be used to analyze groups of social media posts, identifying the source file and establishing the order in which the other files were derived from it.

One day later, on 6 December, Marina Gardella will introduce a third work, entitled “PRNU-based source camera statistical certification”. Given an image and a camera – or a few reference pictures captured by it – this paper proposes an algorithm to certify whether the image was taken with the camera in question. In VERA.AI, this method will be used to verify that a picture was captured by the person who claims its authorship, rather than generated by a neural network trained ad hoc or by an online service.

A fourth work, entitled “Audio Spectrogram Transformer for Synthetic Speech Detection via Speech Formant Analysis”, will be showcased by Luca Cuccovillo on 7 December. Given a file containing speech content, this paper proposes an algorithm to detect whether the speech was generated by a text-to-speech algorithm or recorded from a human speaker. In VERA.AI, this method might be used to help journalists detect synthetic content, warning them when the melody and color of a voice do not sound as they should.

Would you like to know more? Then come to Nuremberg and get in touch with us! We would be happy to chat with you over a cup of mulled wine at the traditional Nuremberg Christmas market. And don't forget to check out our Zenodo community, featuring all the research papers and datasets published by the VERA.AI team!

Author: Luca Cuccovillo (IDMT)

Editor: Jochen Spangenberg (DW)

vera.ai is co-funded by the European Commission under grant agreement ID 101070093, and the UK and Swiss authorities. This website reflects the views of the vera.ai consortium and respective contributors. The EU cannot be held responsible for any use which may be made of the information contained herein.