vera.ai has come to an end – but its results remain accessible. Read on to find out more!

After three years and two months, the vera.ai project formally came to an end on 31 October 2025. Since September 2022, 14 European partners have passionately worked towards the project's goals: building trustworthy AI solutions against advanced disinformation techniques, co-created with and for media professionals, and setting the foundations for future research in the domain of AI against disinformation and related topics.

Although vera.ai is formally over, much of its work, learnings, outcomes and results will live on beyond the project's runtime – in various ways and forms (for more, see the end of this article).

Among the project results are a variety of tools and services. In this article, we collect and provide further information about some of them – another legacy of vera.ai. Unless noted otherwise, these will remain publicly accessible for the foreseeable future.

vera.ai tool factsheets

The following table contains factsheets for tools and services that were developed and/or improved during the project's life-cycle*. Each factsheet contains information on (1) the aim of the tool; (2) access options; (3) contact information; and (4) further resources.
 

Tool / Factsheet | Short description | Responsible partner
Audio Provenance Analysis | The Audio Provenance Analysis tool enables media professionals and journalists to verify the authenticity and reuse patterns of audio materials. | IDMT
B-Free | B-Free is a synthetic image detector based on a novel bias-free training paradigm. | UNINA
Central Claim Extraction & Narrative Detection | The tool automatically extracts check-worthy claims and visualizes their narrative hierarchy. | KInIT
Chatbot-Assisted AI Explainability and Feedback Collection | A chatbot that helps media professionals understand underlying AI models. | USFD
Coordinated Sharing Detection Service | The Coordinated Sharing Detection Service (CSDS) helps media professionals, analysts, and researchers uncover networks of accounts that amplify identical or near-identical content across social media. | UNIURB
CooRTweet | CooRTweet is an open-source R package for detecting and analysing coordinated behaviour on social media. | UNIURB
Database of Known Fakes (DBKF) | The DBKF allows users to easily check whether a claim has already been debunked by trusted fact-checkers. | Ontotext / Graphwise
Image Forgery and Localization Service | This service integrates multiple AI-based models to automatically detect and localize image forgeries. | CERTH
Image-Text Decontextualization Approach | This approach aims to label information that has been divorced from its original context, leading to distorted or misleading interpretations. | CERTH
Keyframe Selection and Enhancement Service (KSE) | The KSE automatically extracts keyframes and enhances important visual elements – such as faces and text – to make details clearer for inspection. | CERTH
Location Estimation Using Solely Visual Clues | This service aims to infer the depicted location of an image – using solely its visual content. | CERTH
Multilingual News Article Framing Classifier | This service identifies the main frames used in a particular article. | USFD
Multilingual News Article Genre Classifier | This service is able to identify news genres in 104 languages. | USFD
Multilingual News Article Persuasion Technique Classifier | This service is able to identify the main persuasion techniques used in a particular article. | USFD
Near Duplicate Detection Service | This service is a robust and versatile audio/visual search solution offering multiple search modalities. | CERTH
POI-Forensics | An audio-visual person-of-interest deepfake detector. | UNINA
Synthetic Image Detection | This service integrates multiple AI-based models to automatically detect and analyze AI-generated or manipulated images. | CERTH
Synthetic Speech Detection | The Synthetic Speech Detection tool helps media professionals and journalists accurately identify AI-generated speech. | IDMT
Topical and Temporal Analysis Tool | The topical and temporal analysis demo is designed to assist media professionals, journalists and social media analysts in exploring how disinformation narratives develop and interact over time. | USFD
TruFor | A forensic tool for image manipulation detection. | UNINA
Truly Media | Truly Media is a collaborative verification platform that enables teams to work together in real time to collect, organize, and verify digital content. | ATC/DW
Verification Plugin | The Verification Plugin brings together tools (including a set of tools developed within vera.ai) that support the verification of digital content. | AFP
Video Deepfake Detection Service | This service integrates two complementary deepfake detection models. | CERTH

* This list is not exhaustive.

More results

For more results and outcomes, visit our publication section, check out the publicly shared presentations, or have a look at the code and datasets we made available. We also invite you to read the articles written about various activities throughout the project's duration.

 

Author: Anna Schild (DW)

Editor: Jochen Spangenberg (DW)

Factsheets: Provided by responsible partners

 

vera.ai is co-funded by the European Commission under grant agreement ID 101070093, and the UK and Swiss authorities. This website reflects the views of the vera.ai consortium and respective contributors. The EU cannot be held responsible for any use which may be made of the information contained herein.