vera.ai research presented at International Conference on Human-Computer Interaction

The way a user interacts with a digital tool, and the way that interaction is designed, are crucial to its successful use. In particular, when it comes to AI-based tools for fact-checking, the user needs to be able to trust the tool, and the tool needs to be designed in such a way as to earn and afford this trust. This principle is called trustworthiness by design.

Over the three years’ duration of vera.ai (yes, we are nearing the end), Anna Schild and Eva Lopez (both from DW) and Lalya Gaye (EBU) set out to define how best to achieve this trustworthiness by design in AI-based fact-checking tools. Using a participatory methodology, they devised a framework based on exploring user needs and evaluating the tools produced in the project together with these very users.

This research culminated in a design framework for the development of trustworthy AI-based fact-checking tools, which developers can follow as a checklist. The research and approach were recently presented online at the HCI International Conference 2025. The conference – which took place in Göteborg, Sweden – focused in particular on the human-computer interaction aspects of artificial intelligence.

Results will be elaborated in more detail in a forthcoming paper, to be published soon by Springer. Details: Gaye, L., Schild, A., Lopez, E. (2025). Designing for trustworthiness in AI-based fact-checking services. In Proceedings of HCI International 2025.

Some key takeaways from the paper include the following:

  • Usability must reach a level of effortlessness conducive to maximum speed of use,
  • AI models and analysis results must feel familiar to users, using language based on their way of understanding the world,
  • Users must be able to cross-check how the model came to a conclusion, i.e. results must be verifiable,
  • Users must be able to easily assess uncertainties to determine the results’ reliability,
  • Tools should provide different levels of use: from basic to advanced,
  • Combining the use of several tools should happen seamlessly, without the need to repeat certain actions,
  • Iterative development and stakeholder participation are essential,
  • Reliable and accurate AI models are not enough: built-in trustworthiness is necessary.

Authors Anna Schild and Lalya Gaye will also present their findings and discuss related matters with an interested audience in a forthcoming EBU webinar taking place on 3 September 2025 at 11 am CET. It is called Co-creating veraAI fact-checking tools for and with media professionals. Anyone can join, but prior registration is required.

If you want to find out more about this innovative research, join the webinar or look out for the publication once it is available.

Authors: Lalya Gaye (EBU) and Jochen Spangenberg (DW)

vera.ai is co-funded by the European Commission under grant agreement ID 101070093, and the UK and Swiss authorities. This website reflects the views of the vera.ai consortium and respective contributors. The EU cannot be held responsible for any use which may be made of the information contained herein.