| Project name | VERification Assisted by Artificial Intelligence |
| --- | --- |
| Project duration | 15 Sep 2022 – 14 Sep 2025 |
| Funding programme | Horizon Europe Framework Programme |
| Expected outcome | Advanced AI solutions against advanced disinformation techniques for media professionals |
| Project budget | 7,046,250 € (all 14 partners) |
| (Max.) EU contribution | 5,691,875 € (12 EU partners) |
| Funding authorities | EU Horizon Europe (Grant Agreement No. 101070093); the UK's innovation agency (Innovate UK), grant No. 10039055; and the Swiss State Secretariat for Education, Research and Innovation (SERI), contract No. 22.00245 |
Digital disinformation of various types and forms (audio, video, images, and text) poses a serious threat to the functioning of open democratic societies, public discourse, the economy, social cohesion and more. Advances in technology are making it ever easier to create manipulations and deceptions, requiring fewer skills and less expertise. At the same time, it is becoming harder to detect what has been digitally altered or manipulated without expert knowledge and skills.
As technological advances are exploited to produce disinformation, it is vitally important that detection technology keeps pace, so that manipulations and forgeries can be identified and disinformation countered.
This is where vera.ai comes in.
The aim of the vera.ai project and its partners is to develop and build trustworthy AI (Artificial Intelligence) solutions for the fight against disinformation. These are to be co-created in close collaboration between leading technology experts in the domain and prospective future users, all brought together in the vera.ai project consortium following a multidisciplinary co-creation approach. The goal is to deliver solutions that can be used by the widest possible community, such as journalists, investigators and researchers, while also laying the foundations for future research and development in the area of Artificial Intelligence against disinformation. The solutions will deal with different content types (audio, video, images, and text) across a variety of languages. Most of them will be open, accessible and usable by anyone.
The vera.ai team will also continue and further enhance work started in the "forerunner project" WeVerify, another EU co-funded project dealing with disinformation detection, which ran for three years and ended in November 2021; seven of the 14 vera.ai project partners were also involved in it. One part of this is building on the highly successful InVID-WeVerify verification plug-in, a browser extension that combines a wide variety of verification and analysis features in one platform. Currently (late 2022), the Chrome version of the InVID-WeVerify verification plug-in has more than 75,000 monthly users and is considered one of the de facto standard tools for journalists and other digital investigators.
In addition to the InVID-WeVerify verification plug-in, several solutions and components being developed in vera.ai will, depending on suitability and ownership considerations, be integrated into other existing services (such as the collaborative verification platform Truly Media) and/or made available as stand-alone tools (meaning they can operate on their own).
The project team is looking forward to an exciting and challenging undertaking in which all partners will do their best to support the fight against disinformation through the sensible use of Artificial Intelligence.
Author: Jochen Spangenberg (DW)
Editor: Olga Papadopoulou (CERTH)