Press release by KInIT on the vera.ai launch.

vera.ai: VERification Assisted by Artificial Intelligence

vera.ai is a Horizon Europe project aimed at basic and applied research into AI methods for fighting false information. The project focuses on textual, multilingual and multimodal content, with a strong emphasis on context and inter-content relationships. KInIT is part of a consortium of first-class researchers and practitioners in this field.

Online disinformation and fake media content have emerged as a serious threat to democracy, the economy and society. Recent advances in AI have enabled the creation of highly realistic synthetic content and its artificial amplification through AI-powered bot networks. Consequently, it is extremely challenging for researchers and media professionals to assess the veracity and credibility of online content and to uncover highly complex disinformation campaigns.

vera.ai seeks to build trustworthy, professional-grade AI solutions against advanced disinformation techniques, co-created with and for media professionals and researchers. The project will also lay the foundation for future research into AI against disinformation.

Key novel characteristics of the AI models will be:

  • fairness
  • transparency (including explainability)
  • robustness against concept drift
  • continuous adaptation to the evolution of disinformation through a fact-checker-in-the-loop approach
  • ability to handle multimodal and multilingual content

Recognising the perils of AI-generated content, vera.ai will develop tools for deepfake detection in all formats (audio, video, image, text). KInIT’s role in the project will focus on multilingual NLP tasks, as well as on the identification of disinformation campaigns and narratives.

The vera.ai project adopts a multidisciplinary co-creation approach to AI technology design, coupled with open-source algorithms. A unique key proposition is the grounding of the AI models in continuously collected fact-checking data, gathered from the tens of thousands of instances of real-life content verified in the InVID-WeVerify plugin and the Truly Media/EDMO platform. Social media and web content will be analysed and contextualised to expose disinformation campaigns and measure their impact.

Results will be validated by professional journalists and fact-checkers from project partners (DW, AFP, EUDL, EBU), external participants (through our affiliation with EDMO and seven EDMO Hubs), the community of more than 53,000 users of the InVID-WeVerify verification plugin, and by media literacy, human rights and emergency response organisations.

Project goals:

  • Deliver multilingual and multimodal trustworthy AI methods for content analysis, enhancement and evidence retrieval to assist in disinformation detection and content verification.
  • Deliver multimodal trustworthy AI tools for the detection of deepfakes, synthetic media and manipulated content, including AI-generated and AI-manipulated images, videos, audio and text.
  • Enable the discovery, tracking, and impact measurement of disinformation narratives and campaigns across social platforms, modalities, and languages, through integrated AI and network science methods.
  • Provide an intelligent verification and debunk-authoring assistant based on chatbot NLP technology.
  • Pursue a fact-checker-in-the-loop approach that seamlessly gathers new actionable feedback as a side effect of verification workflows, continuously adapting the AI models.
  • Ensure adoption and sustainability of the new AI tools in real-world applications through integration in leading verification tools.


You can find the original press release here.

vera.ai is co-funded by the European Commission under grant agreement ID 101070093, and by the UK and Swiss authorities. This website reflects the views of the vera.ai consortium and the respective contributors. The EU cannot be held responsible for any use which may be made of the information contained herein.