Meet: Kamila Koronska

With the interview format WHO IS BEHIND VERA.AI? we successively introduce the people who make up the project, their roles within vera.ai, as well as their views on challenges, trends and visions for the project's outcomes.

Who are you and what is your role within vera.ai?

Hello, I'm Kamila Koronska, a Research Officer at the University of Amsterdam, specialising in the study of misinformation. I joined UvA in September 2022 to contribute to the vera.ai project. Within the project, my focus lies on researching user needs related to verification technology. Our most recent work involved a fact-checking ethnography, which I found fascinating.

Within Work Package 4, Analysis of Complex Disinformation Phenomena, I am responsible for measuring the impact of online misinformation. There are various methods to approach this research, such as measuring people's engagement with problematic content or assessing the visibility of misinformation in mainstream media.

On occasion, and when time allows, I also freelance as a (broadcast) journalist.

Kamila Koronska, University of Amsterdam

What challenges and trends do you currently see regarding disinformation analysis and AI supported verification tools and services (and what will you do about it, if anything)?

I'm not a physicist, but it seems that injecting chaos into the system is much easier than removing it. Given this disadvantage, I am aware that finding effective means to remove misleading, inaccurate or otherwise harmful content from the Internet is tricky. Because of that, I believe that until the major platforms get actively involved, we will continue to struggle to control misinformation.

When it comes to challenges more specifically, I think that the old-fashioned methods of misleading social media users still work. One significant concern I have is the ease with which unverified content from unmoderated, so-called fringe platforms rapidly disseminates on mainstream social media. We witness this in the context of recent global conflicts. This is problematic because such content is often shared without proper scrutiny by those who repost it.

Furthermore, I've noticed an increase in official accounts posing as journalistic sources, even when their credibility is questionable. Just because something has a polished interface doesn't necessarily mean it's genuine journalism. After all, all that glitters is not gold.

What is your 2025 vision of your contributions/outcomes within the scope of vera.ai?

I hope that through our work, we will gain a more comprehensive understanding of the needs of the fact-checking, research, and journalistic communities, so they can do their work more easily.

I hope we can identify areas that require improvement or are currently progressing slowly and introduce innovative solutions that enable creative approaches to addressing these problems. I'm particularly excited about our work in WP4 on coordinating behaviours, as it is a good example of doing exactly that.

But to reiterate, we are dealing with a highly complex issue, and the advancements in generative AI only add to the challenge. We are trying to find solutions that will be effective across different modes, languages, and platforms, which is undoubtedly challenging. However, I hope this challenge will be seen as a motivation rather than an obstacle for all involved.

What keeps you awake at night?

News, quite literally. 

Author / contributor: Kamila Koronska (University of Amsterdam)

Editor: Anna Schild (DW)

vera.ai is co-funded by the European Commission under grant agreement ID 101070093, and the UK and Swiss authorities. This website reflects the views of the vera.ai consortium and respective contributors. The EU cannot be held responsible for any use which may be made of the information contained herein.