WHO IS BEHIND VERA.AI?

Meet: The team of the University of Urbino Carlo Bo

With the interview format WHO IS BEHIND VERA.AI?, we introduce, one by one, the people who make up the project, their roles within vera.ai, and their views on challenges, trends, and visions for the project's outcomes.

Who are you and what is your role within vera.ai?

As a team at Uniurb, we are led by Fabio Giglietto, an Associate Professor at the Department of Communication Sciences, Humanities, and International Studies, and coordinator of the Mapping Italian News Research Program at the University of Urbino Carlo Bo. Our team includes Roberto Mincigrucci, a post-doctoral researcher with interests in journalism and problematic information, and Giada Marino, a post-doctoral research fellow specializing in polarization, problematic information, and participatory culture.

Together, we coordinate Work Package 4 (WP4) within the vera.ai project. Our objective is to discover potentially problematic content in near-real-time using a content-agnostic approach. We specifically concentrate on addressing coordinated sharing behavior – instances where actors share identical content multiple times within a brief period – by combining social media analysis and network science methods.
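To illustrate the idea behind coordinated sharing detection, here is a minimal, self-contained sketch. It is not the project's actual implementation (CooRnet and WP4 tooling are more sophisticated); the function name, the data layout, and the thresholds are illustrative assumptions. It flags pairs of accounts that repeatedly share the same URL within a short time window, which is the core signal described above:

```python
from collections import defaultdict
from itertools import combinations

def coordinated_pairs(shares, window=60, min_repeats=2):
    """Illustrative sketch, not the CooRnet algorithm.

    shares: list of (account, url, timestamp_in_seconds) tuples.
    Returns pairs of accounts that shared at least `min_repeats`
    distinct URLs within `window` seconds of each other.
    """
    # Group shares by the content (URL) being shared.
    by_url = defaultdict(list)
    for account, url, ts in shares:
        by_url[url].append((ts, account))

    # For each URL, find account pairs posting it within the window.
    pair_urls = defaultdict(set)
    for url, posts in by_url.items():
        posts.sort()
        for (t1, a1), (t2, a2) in combinations(posts, 2):
            if a1 != a2 and abs(t2 - t1) <= window:
                pair_urls[tuple(sorted((a1, a2)))].add(url)

    # Keep only pairs whose co-sharing repeats across several URLs.
    return {pair for pair, urls in pair_urls.items()
            if len(urls) >= min_repeats}
```

In a network-science framing, each returned pair would become an edge in a coordination graph, and densely connected components of that graph would be candidate coordinated networks.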

To learn more about our team and our work, please visit our respective online profiles or the vera.ai project website.

Fabio Giglietto, University of Urbino Carlo Bo

What challenges and trends do you currently see regarding disinformation analysis and AI supported verification tools and services (and what will you do about it, if anything)?

One of the major challenges in disinformation analysis lies in the constant evolution of disinformation tactics. Large language models (LLMs) and other generative AI systems can pose a threat, as they may produce a higher volume of problematic content that can be easily customized and targeted toward specific groups and individuals. This could amplify disinformation and further undermine public trust in institutions and the media. As disinformation actors become more sophisticated, AI-supported verification tools and services must be continuously updated to detect and counteract these tactics effectively. At the same time, models such as LLMs can serve as supportive tools against disinformation, for example by clustering posts into narratives and identifying patterns in content that expose potential disinformation campaigns.
A notable trend in disinformation analysis involves increasing collaborative efforts among scholars and professionals with diverse backgrounds, such as engineers, computer scientists, journalists, and social scientists like ourselves. Such collaborations can contribute to the development of more effective countermeasures against disinformation, including AI-powered fact-checking tools and comprehensive media literacy programs that enable individuals to recognize and reject false information.

Giada Marino, University of Urbino Carlo Bo

What is your 2025 vision of your contributions/outcomes within the scope of vera.ai?

By 2025, our vision is to offer an effective discovery and alert service for fact-checkers, investigative journalists, and researchers, allowing them to identify potentially problematic content in near-real-time using a content-agnostic approach. This tool will enable better resource allocation by prioritizing cases with a higher potential for harm, such as those with high engagement or cross-platform reach. We aim to achieve this goal by analyzing the behavior of known problematic social media actors and platforms. Our tool will employ AI and machine learning algorithms to recognize patterns and similarities in content, helping us group posts into narratives, uncover trending problematic content, detect potential disinformation campaigns, and estimate their impact.
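The prioritization idea above can be sketched very simply. This is a hypothetical scoring function, not the project's actual model: the weights and field names are illustrative assumptions, standing in for whatever harm proxies the tool would learn or configure. It ranks detected content clusters by the two signals named in the text, engagement and cross-platform reach:

```python
def priority_score(engagement, platforms,
                   engagement_weight=1.0, platform_weight=100.0):
    """Toy linear harm proxy; weights are illustrative, not project values."""
    return engagement_weight * engagement + platform_weight * len(platforms)

def prioritize(clusters):
    """clusters: list of dicts with 'id', 'engagement', and 'platforms'.

    Returns cluster ids sorted from highest to lowest estimated priority,
    so analysts review the potentially most harmful cases first.
    """
    return [c["id"] for c in sorted(
        clusters,
        key=lambda c: priority_score(c["engagement"], c["platforms"]),
        reverse=True)]
```

A real system would replace the linear score with richer signals (velocity, known-actor involvement, audience size), but the triage principle is the same.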

The tool we envision leverages the researcher APIs mandated by Article 40 of the EU's Digital Services Act for very large platforms and service providers operating within the European Union. In pursuit of this goal, we are currently collaborating with Meta's researcher products team as alpha-testers, ensuring that their solutions under development meet our application requirements. Additionally, we are eager to forge similar collaborations with other mainstream social media companies.

Roberto Mincigrucci, University of Urbino Carlo Bo

What makes you happy/content?

Overall, we take great satisfaction in seeing our work reused by others, both through academic citations of our papers and through investigative works, studies, and reports that use our tools, such as CooRnet.

What keeps you awake at night?

Nightmares about waking up in a dystopian land haunt me, a place where information is tightly controlled and freedom of speech is severely limited by a totalitarian regime. This regime has cunningly figured out how to exploit the vulnerabilities of our contemporary media ecosystem, allowing them to permanently manipulate public opinion and maintain their oppressive rule.

Author / contributor: Fabio Giglietto, Giada Marino, Roberto Mincigrucci (University of Urbino Carlo Bo)

Editor: Anna Schild (DW)

vera.ai is co-funded by the European Commission under grant agreement ID 101070093, and the UK and Swiss authorities. This website reflects the views of the vera.ai consortium and respective contributors. The EU cannot be held responsible for any use which may be made of the information contained herein.