In our interview series WHO IS BEHIND VERA.AI? we present the people who make up the project, their roles within vera.ai, and their views on challenges, trends and visions for the project's outcomes.
I’m Luca Cuccovillo, a senior researcher at the Fraunhofer Institute for Digital Media Technology (Fraunhofer IDMT) in Germany, where we analyze and process media content with the help of Artificial Intelligence (AI) and Digital Signal Processing (DSP) tools.
At university I studied sound engineering and design, and at Fraunhofer IDMT I then began to focus on audio forensics and Artificial Intelligence within the media distribution and security group. You can check my LinkedIn profile and Google Scholar page if you want to know more!
Within vera.ai, together with my colleagues at Fraunhofer IDMT, I am developing new tools that fact-checkers can use to detect manipulated, synthetic, and decontextualized audio content. I see many similarities between the audio forensics domain I come from and fact-checking, and in vera.ai I am trying my best to create tools that are fit for both.
Challenges related to disinformation analysis are countless, to be honest: traces left in content manipulated or created by AI are increasingly difficult to detect; fact-checkers need hours or days to debunk fake content made up in a few minutes; tools for content analysis are difficult to use and sometimes provide contradictory results… all reasons why projects like vera.ai are absolutely crucial.
The challenge I find most intriguing, however, is to convey the message that AI tools for disinformation detection cannot tell us whether some content is “fake” or not; rather, they should provide us with clues which we can use to decide whether to “trust” the content.
We can devise a perfect tool which today is able to tell us whether some audio content is synthetic, and it will likely become obsolete in a few months; or we can instead devise a perfect tool that tells us which smartphone was used to capture the content, and check this information against the claims of the users sharing it. The second approach is more sustainable, as it does not treat AI as some sort of oracle. It should also give the reader an idea of the direction I intend to follow during the project's lifetime.
The biggest outcome I can envision is our audio forensics tools from Fraunhofer IDMT being used by fact-checkers all over the world, of course!
To be more concrete, I want to provide tools whose outputs can be interpreted by fact-checkers with a limited amount of training – we shouldn't all have to be AI experts or scientists, but neither should we use tools without reading their instructions – tools that support the verification process by enriching content with information to be double-checked and verified, and whose limits are stated in advance to avoid misuse. In a nutshell: trustworthy AI for trustworthy verification workflows.
The answer is easy: Cooking! I associate good food with happy faces, and I am very proud whenever everyone is having a good time thanks to a meal I prepared. The more the merrier – as long as the dishwasher is not broken :P.
Author / contributor: Luca Cuccovillo (Fraunhofer IDMT)
Editor: Jochen Spangenberg (DW)