On 29 June 2023 the day had finally come. Horizon Europe research projects AI4Media, AI4Trust, TITAN, and vera.ai – together with the European Commission – welcomed about 90 participants on-site at VRT in Brussels for the event "Meet the Future of AI: Countering Sophisticated & Advanced Disinformation".
During the day, various aspects surrounding the development and use of Artificial Intelligence and its relationship to the disinformation sphere were discussed and showcased.
Here, we provide a summary and some takeaways from the event.
After a welcome by the organisers, Luc Van Bakel from the VRT NWS data, disinformation and technology (DDT) team set the scene for the event, also welcoming everyone on behalf of host VRT.
This was followed by a warm welcome from Krisztina Stump of the European Commission, Head of the Unit in which all organising projects are located (namely Media Convergence and Social Media - CNECT.I.4), who shared an inspiring message via pre-recorded video.
With the stage set and the attendees eager to explore the future of AI, the conference embarked on a captivating journey of exploration in which experts shared their knowledge, insights, and innovative approaches, as well as respective challenges encountered on the way.
Gina Neff, Professor at the University of Cambridge, chaired the opening panel. Before handing over to the speakers, she highlighted what this conference made possible: bringing together stakeholders, from different projects and beyond, to address the challenges professionals and civil society face with regard to AI, while also pointing to the opportunities it provides.
In the session, Marc Faddoul, director and founder of AI Forensics, Henry Parker, Head of Government Affairs at Logically AI, and Olle Zachrison, News Commissioner at Swedish Radio and co-founder of the Nordic AI Journalism Network, shared their beliefs and assessments on Artificial Intelligence and disinformation, with a focus on the potential threats it poses, the opportunities it offers, and the action it requires.
One of the most critical aspects in recent developments, according to Marc Faddoul, can be found at the intersection between humans and AI. While language models empower humans with the capabilities of AI, they are not designed to prioritise truth but rather replicate what they were trained on. Therefore, one of the core concerns is the intentional use of language models to create disinformation, according to Marc.
The contribution by Henry Parker made clear how closely intertwined the use of AI for fostering and for tackling mis- and disinformation is. He introduced Coordinated Inauthentic Behavior (CIB), which can be regarded as a driving force in the spread of disinformation. Speeding up this process, creating hyper-realistic and viral content, and targeting people more efficiently are just a few of the new possibilities that recent developments in AI open up. Logically's proposed approach to detecting CIB uses a combination of machine learning and open-source intelligence (OSINT), employing a "human in the loop" model.
Another perspective on the topic was given by Olle Zachrison, who focused on how public service media (PSM) can address the challenge of disinformation amplified by generative AI. He highlighted the need for PSM to improve their ability to detect disinformation and to establish better structures and policies. Collaboration with trusted partners is crucial in this effort, he added. A case in point is the European Broadcasting Union's news sharing initiative "A European Perspective", which involves sharing trusted PSM news and fact-checks from different European countries.
Marc summarized his intervention by stating that “transparency and scrutiny will remain key” to provide AI-systems users can trust and that do not foster mis- and disinformation.
The second panel focused on the potential of existing mechanisms and new regulations in addressing the issue of disinformation. The session was moderated by Alexandre Alaphilippe from EU DisinfoLab. Speakers were Alberto Rabacchin, Deputy Head of Unit at the European Commission's DG Connect, Noemie Krack from CITIP, KU Leuven, François Lavoir from the European Broadcasting Union (EBU), Paula Gori, Secretary General of EDMO, and Luca Bertuzzi, technology journalist specialising in digital policy at EURACTIV.
Alberto opened the panel by noting that the four projects present are proof that the EU saw recent developments in AI coming, and highlighted the significant investments the EU is making to address related issues. He further provided an overview of relevant regulations being worked on, including the AI Act, the Digital Services Act (DSA), and the Code of Practice on Disinformation.
To this list of regulations, Noemie added the General Data Protection Regulation (GDPR) and competition rules, which address transparency, compliance, and harm prevention.
However, while it became clear that there are several regulatory interventions in progress, Luca pointed out the specific challenges of regulating AI-systems by asking: “How do you regulate a technology that changes fundamentally every few months?"
Despite this question, various speakers agreed on the need for multi-stakeholder discussions on the regulation of AI. Accordingly, collaboration between technical, legal, and human rights experts and policymakers to address the challenges posed by generative AI is indispensable. Furthermore, it was agreed that it is more necessary than ever to bring transparency into those stakeholder discussions. Paula Gori summed it up as follows: “We have to work with all our expertises together”.
From a practitioner's point of view, Paula further highlighted the difficulty of assessing whether a text or image is AI-generated, noting that images often have a more significant impact than text. She also described EDMO's initiatives across different member states, including ADMO, BENEDMO, CEDMO, MEDMO, DE FACTO, and NORDIS, among others, which cover various aspects of fact-checking and evaluation using AI tools. She emphasised EDMO's goal of facilitating discussions, knowledge sharing, and avoiding the duplication of work wherever possible.
The EBU and its members are also undertaking AI-based projects to embrace new possibilities. François provided an overview of use cases from EBU members, including the use of AI tools for hate speech analysis, as well as face and voice recognition for gender balance assessment.
The third panel focused on how generative AI can foster critical thinking among citizens, media professionals, and internet users. The panel brought together experts from various disciplines, including philosophy, psychology, and art, to explore new perspectives in the interaction between media, art, and generative AI within the context of disinformation.
The panel started with a philosophical take on tackling disinformation by Giannis Stamatellos from the Institute of Philosophy & Technology, also a professor of philosophy at the American College of Greece. He discussed virtues of character for AI, drawing on Platonic and Aristotelian ethics: justice, self-control, moral agency, trust, and honesty. He also highlighted the potential empowerment of citizens through generative and collaborative AI, encouraging a shift in perspective from viewing AI as an opponent to considering it as a potential aid.
Professor Antonella Poce, a psychologist from the University of Rome Tor Vergata, highlighted the importance of understanding what critical thinking entails and how to select reliable information, which is strikingly important nowadays, as she pointed out. She furthermore discussed the development of tools for assessing critical thinking, including an AI-based tool for evaluating open-ended questions.
Lastly within this panel, Dejha Ti and Ania Catherine, artists collaborating under the MediaFutures programme presented their project Soft Evidence, which involves creating a synthetic media series depicting events that never happened. They explored the concept of ‘productive confusion’ by dissociating AI-generated content from the representation of reality. They also discussed the challenges of using open datasets, highlighting the potential for non-consensual content to be included.
After the networking lunch, Jochen Spangenberg from Deutsche Welle welcomed the attendees to Panel 4, which focused on technological and strategic approaches to detecting and countering AI-generated content. The panel showcased the work of the four projects that organised the day's event. All are European Commission co-funded research projects that address challenges in the disinformation sphere, specifically the detection of AI-generated content, using tools and technologies to support digital content analysis and verification.
Riccardo Gallotti from Fondazione Bruno Kessler presented the AI4TRUST project, which aims to enhance the work of human fact-checkers through automated monitoring of social and news media using advanced AI-based technologies. He shared practical experiences, such as an observatory that gathered COVID-19 infodemic content, and introduced various tools, including a clickbait content analyser and a "Verdict Generator" that could aid in debunking disinformation.
Symeon Papadopoulos from the mever team at Centre for Research and Technology Hellas (CERTH) introduced the vera.ai project, which brings together research capabilities from different fields, fact-checkers, NGOs, and the European Broadcasting Union (EBU). The project aims to develop tools for detecting AI-generated content and employ novel AI and network science-based methods to assist verification professionals. User needs and requirements were highlighted, including deepfake detection and the importance of transparency and explanation of results.
Luisa Verdoliva from the University Federico II of Naples (UNINA) presented some of the studies used as a basis for the vera.ai project. These studies focus on detecting local manipulations in synthetic images, analysing traces left by generative AI on created images, and developing methods for deepfake detection of persons of interest. The presentation also touched on the generation and detection of synthetic text using language models.
Antonis Ramfos from Athens Technology Center (ATC) introduced the TITAN – Technology for Citizens project, which aims to create an AI-based citizen coaching ecosystem against disinformation. The project proposes an intelligent chatbot to guide citizens to logical conclusions about the factual correctness and reliability of statements. The chatbot would consider the citizen's critical thinking capacity, incorporate fact-checking processes and tools, and emphasise a human-centred approach. The project highlights the importance of a multidisciplinary team and co-creation in its approach.
Finally, Filareti Tsalakanidou from the Centre for Research and Technology Hellas (ITI-CERTH) gave insights into the AI4Media project. Over its 4-year runtime, the project aspires to become a Centre of Excellence engaging a wide network of researchers across Europe and beyond. It focuses on delivering the next generation of core AI advances and training to serve the media sector, while ensuring that the European values of ethical and trustworthy AI are embedded in future AI deployments.
After the individual project presentations, the panel engaged in lively discussions, also taking questions from the audience. It concluded with the moderator asking every panelist what they would wish for in an ideal world, limited to a one-sentence answer per speaker.
Wishes varied considerably in scale and scope: unlimited computing power, easy access to talent, a shift away from exposure-maximisation algorithms, intellectual honesty and reproducible results, generative AI living up to expectations, and more light shed on largely ‘black box’ AI systems.
The programme ended with cross-cutting conclusions by Kalina Bontcheva, University of Sheffield and scientific director of vera.ai, who summarised the key outcomes of the day and provided a way forward for addressing disinformation through technological and strategic approaches.
The European Commission’s Project Officer of the four co-funded research projects organising the conference, Peter Friess, then concluded the day with his closing words, thanking all participants for fruitful discussions and their support in achieving one of the most important goals of the conference: Connecting various stakeholders to better collaborate in the tackling of disinformation.
The long but rewarding conference day, which addressed a wide variety of challenges and opportunities of generative AI for both producing and countering disinformation, was concluded with an informal networking session featuring nibbles and drinks. It saw content and exhausted faces all around.
Authors: Heini Järvinen (EUDL) & Anna Schild (DW)
Editor: Jochen Spangenberg (DW)