Here you can find announcements about where you can meet us or see us in action – be it physically (i.e. at conferences, workshops etc.) or virtually (on whichever platform it may be).
In October 2025, the University of Urbino team presented multiple research outputs related to vera.ai across two major conferences in Brazil, bringing together scholars from around the world to examine how disinformation spreads, evolves, and shapes our digital landscape.
From 12–15 September, IBC2025 took place in Amsterdam. It's the leading global event at which the media, entertainment, and technology industries gather to share insights, showcase innovation, and shape the future of content. We were there, too, and had the opportunity to showcase tools and services coming out of the project.
The EBU's Lalya Gaye and DW's Anna Schild and Eva Lopez teamed up to dive deep into how best to achieve trustworthiness by design in AI-based fact-checking tools. They devised a design framework based on exploring user needs and evaluating tools produced in the project with the respective users. Parts of this work were recently presented at the HCI International Conference 2025. Paper to follow soon.
On June 30, 2025, Chicago, USA became the meeting point for experts and researchers fighting disinformation: The 4th edition of the ACM International Workshop on Multimedia AI against Disinformation took place as part of the ACM International Conference on Multimedia Retrieval (ICMR). vera.ai's Thomas Le Roux of Fraunhofer IDMT provides a short recap of this year's workshop.
Join us for the Synthetic Image Detection Challenge, a task of the MediaEval 2025 workshop organised by the EU-funded projects vera.ai and AI-CODE. In this article, you can find out more about what the challenge entails, the respective dates and deadlines, as well as the requirements. We look forward to your creative solutions and insightful contributions.
On 24 June 2025, more than 70 participants joined the second public vera.ai webinar, in which project partners presented key research outcomes and technological advancements developed over the past three years. The sessions were geared towards researchers and professionals, focussing on the cutting-edge AI models and methodologies that power the verification tools offered by vera.ai. Here's a recap of the event, including access to related materials.
On 24 June 2025 we hosted the second edition of two sessions in which we presented project outcomes. Following edition 1, which primarily targeted practitioners such as journalists, fact-checkers and investigators, edition 2 had the research community as its primary target audience. In other words: those developing the cutting-edge tools and services in vera.ai that support content analysis to counter disinformation. Here is a recording of the entire webinar.
Here it is: the recording of our exclusive webinar hosted on 17 June 2025, now part of the vera.ai video series. Its aim: to present some of the project outcomes and results to date, focussing on tools and services for fact-checkers, journalists, investigators and other practitioners active in countering disinformation.
On 17 June 2025 we hosted the first session in which we presented project outcomes. It primarily targeted practitioners, such as journalists, fact-checkers, investigators and anyone interested in hands-on advice and solutions. Over 100 people attended the webinar. Here's a recap, with resources and a recording of the entire event.
On 17 and 25 June 2025, the vera.ai consortium invites interested individuals to attend two online events in which project outcomes will be presented. The sessions on 17 June will focus on vera.ai tools and services, with journalists and fact-checkers as the primary target audience. The sessions on 25 June will offer deep dives into the research, aiming to reach primarily academics and researchers.
vera.ai is co-funded by the European Commission under grant agreement ID 101070093, and the UK and Swiss authorities.
This website reflects the views of the vera.ai consortium and respective contributors. The EU cannot be held responsible for any use which may be made of the information contained herein.