WHO IS BEHIND VERA.AI?

Meet: Ivan Srba

With the interview format WHO IS BEHIND VERA.AI?, we successively present the people who make up the project, their roles within vera.ai, as well as their views on challenges, trends, and visions for the project's outcomes.

Who are you and what is your role within vera.ai?

I am Ivan Srba, an enthusiastic AI researcher, data scientist, web developer, and leader of the vera.ai task on so-called "automatic credibility assessment". I am currently a senior researcher at the Kempelen Institute of Intelligent Technologies (KInIT) in Bratislava, Slovakia. You can find more information about my research and activities in the respective section of KInIT's website and on my Google Scholar profile.

Ivan Srba, KInIT

What challenges and trends do you currently see regarding disinformation analysis and AI supported verification tools and services (and what will you do about it, if anything)?

My biggest concern relates to generative AI, especially large language models (LLMs), which have made unprecedented progress in the past months. I observe, with deep admiration as well as respect, their capability to generate multilingual content that is difficult or even impossible to distinguish (for a human or a detection algorithm) from human-written text. While such AI opens up amazing opportunities to address difficult or completely new tasks and to increase the performance of many existing solutions, it undoubtedly poses a significant threat when used to automatically create and spread content that is meant to mislead, confuse, or disinform.

Within the vera.ai project, we thoroughly examine how large language models can be misused to generate disinformation content. We aim to identify the combinations of language models, topics, and languages that represent the biggest risk. These findings can consequently help researchers, as well as creators of language models, to direct the next steps in making generative AI safer and more trustworthy for society.

What is your 2025 vision of your contributions/outcomes within the scope of vera.ai?

First, I expect that we will be able to provide novel tools to automatically assess the credibility of online content. Giving media professionals the ability to quickly examine the credibility signals in a text may help them decide more effectively whether such content is trustworthy and can be cited, or quite the opposite, namely that it may spread disinformation and be worthy of fact-checkers' attention.

Secondly, I anticipate a deeper understanding of large language models' potential to generate disinformation content, along with a service for recognizing and detecting such automatically generated content.

Since both of these expected outcomes are in line with the needs of media professionals, I believe that our contributions will help them combat disinformation more effectively and will put novel, currently unavailable services into their hands.

What makes you happy/content?

A positive review from Reviewer 2, traveling, and time spent with my family.


Author / contributor: Ivan Srba (KInIT - Kempelen Institute of Intelligent Technologies)

Editor: Jochen Spangenberg (DW)


vera.ai is co-funded by the European Commission under grant agreement ID 101070093, and the UK and Swiss authorities. This website reflects the views of the vera.ai consortium and respective contributors. The EU cannot be held responsible for any use which may be made of the information contained herein.