With the interview format WHO IS BEHIND VERA.AI? we present the people who make up the project, their roles within vera.ai, as well as their views on challenges, trends and visions for the project's outcomes.
I am Miguel Colom, a senior researcher at Centre Borelli (ENS Paris-Saclay, France), which brings together research in mathematics, computer science, and neuroscience. My background is diverse, spanning computer science and mathematics. At Centre Borelli I am involved in quite different projects, from image forensics to satellite imaging. I am also one of the founding editors of the IPOL journal, I teach two courses on reproducible research and open science in our “MVA” master's programme, and I supervise several PhD candidates on different subjects. For more information about me: http://mcolom.info/
Our forensics group is made up of PhD candidates, post-docs, and senior researchers working on the detection of falsifications in images and videos, including deepfakes. In vera.ai we participate in several tasks, including the topic Multimodal Content Verification (mainly, detecting falsifications in images and videos). We also coordinate a task on improving two existing tools, which have great potential to help journalists and other professionals in their fight against disinformation. One is an automatic geolocation system by CERTH, which estimates geographic coordinates from a single image. The other is a text location, extraction and translation (OCR) tool by USFD.
One challenge that I see: nowadays, the technology to produce synthetic images and videos has advanced enough to become available to the general public. This is, of course, something great! But it can also be used to easily produce false images and videos for disinformation purposes. Furthermore, these forgeries can be quite sophisticated, such as those produced by generative networks. We have all seen examples of deepfake videos, which can fool not only inexperienced viewers but also journalists and sometimes even experts.
Fortunately, we know that many forgery methods, including generative networks, may leave detectable low-level traces in images and videos. We can use these as clues to determine in a reliable way whether a given piece of media might be a falsification. Such traces include JPEG grid inconsistencies, noise discrepancies, and detectable patterns in generated videos, among others.
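To give a flavour of the JPEG-grid idea mentioned above (this is an illustrative sketch, not one of the vera.ai tools, and the function name is hypothetical): JPEG compresses images in 8×8 blocks, which leaves slightly larger pixel differences at block boundaries than inside blocks. Accumulating the horizontal difference energy per column position modulo 8 therefore peaks at one offset; a spliced region whose grid is misaligned with the rest of the image would peak at a different offset.

```python
import numpy as np

def estimate_jpeg_grid_offset(img):
    """Estimate the horizontal offset (0-7) of the 8x8 JPEG block grid.

    Illustrative sketch: JPEG compression tends to leave larger pixel
    differences across block boundaries than within blocks, so the sum
    of absolute horizontal differences, binned by column index mod 8,
    peaks at the column just before each block boundary.
    """
    img = np.asarray(img, dtype=np.float64)
    # absolute differences between horizontally adjacent pixels;
    # diff column c compares pixel columns c and c+1
    diff = np.abs(np.diff(img, axis=1))
    # accumulate difference energy per column position modulo 8
    energy = np.zeros(8)
    for col in range(diff.shape[1]):
        energy[col % 8] += diff[:, col].sum()
    return int(np.argmax(energy))

# Synthetic "JPEG-like" image: 8x8 grid of constant 8x8 blocks, so all
# horizontal differences fall exactly on block boundaries (offset 7).
blocks = np.arange(64, dtype=np.float64).reshape(8, 8)
img = np.kron(blocks, np.ones((8, 8)))
print(estimate_jpeg_grid_offset(img))                    # grid-aligned image
print(estimate_jpeg_grid_offset(np.roll(img, 3, axis=1)))  # shifted grid
```

Comparing this offset estimate between image regions is one simple way such a low-level trace can expose a copy-paste that broke the original grid alignment.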
We also rely on the expertise of journalists, fact-checkers and other experts working on the subject. To this end, we want to provide not only automatic detection tools, but also tools these professionals can use to assess the legitimacy of the data they work with. Examples are the OCR and geolocation tools we are currently working on.
Our team works actively on developing new forgery detection methods, and fortunately we collaborate with partners such as Agence France-Presse (AFP), Deutsche Welle (DW), and others, who can provide real images and videos to test our methods on. I am therefore quite confident that by 2025 we will be able to give back to the community methods that journalists can apply in their day-to-day work.
We expect, of course, that generative methods will also evolve and produce forged images which look more realistic and are more difficult to spot. However, we believe they will still leave some detectable traces that allow us to identify them as forgeries.
Providing automatic detection methods, as well as interactive tools that make the work of journalists easier, would be a great outcome for us!
I am quite a night owl by default, so it’s rather easy to keep me awake at night. When I was a student, a long time ago, I lived at the Colegio de España (Cité Universitaire) in Paris, where I spent many nights with friends chatting about so many topics until morning!
Unfortunately, I can’t do that anymore but, for good or for bad, I still procrastinate a lot at night watching online videos, series, and chatting in some groups.
Author: Miguel Colom (Centre Borelli, ENS)
Editor: Anna Schild (DW)