Dealing with digital content: dangers, risks and potential consequences

Dealing with digital material as a journalist, investigator or researcher can have a number of downsides. One is having to view gruesome, potentially traumatizing imagery - and the negative consequences this can have on mental health.

It is - or should be - of utmost importance to protect investigators as much as possible from the negative impacts of dealing with such digital content, and to guard their mental wellbeing.

This must not be taken lightly, as the risks of secondary or vicarious trauma are evident. (We have also written about avoidance and coping techniques earlier on this site; see the article “From the digital frontline: guarding investigators' mental health”.)

Using AI to detect gruesome imagery and protect investigators

Against this background, a team of CERTH and DW researchers has come together - as part of the research project MediaVerse - to test and research to what extent Artificial Intelligence can

  • detect gruesome imagery that is potentially disturbing or traumatizing,
  • apply a variety of filters intended to reduce the impact of imagery containing graphic elements,
  • serve as a kind of ‘early warning system’ so investigators do not come across gruesome imagery unprepared,
  • thereby support trauma prevention and guard mental wellbeing.

Your cooperation is much appreciated - but only participate if you really feel up for it (do not take this lightly, and carefully consider all warning notes).

If you are frequently exposed to potentially disturbing or traumatizing imagery as part of your work, the CERTH-DW research team would be extremely grateful if you could participate in the study entitled “Mitigating Viewer Impact from Disturbing Imagery using AI Filters”.

You can do so by filling out the questionnaire, available via the following link: https://docs.google.com/forms/d/e/1FAIpQLSfsLBmtD0KmeWDWaDVXpob396U2lglpcXvZFB34jVPiuB_VXA/viewform

While going through the questionnaire, you will be asked to rate five different filters applied to potentially disturbing images. Some of these are AI filters that transform an image into, for example, a black-and-white drawing, a slightly coloured drawing or a painting, while others are ‘classic’ full and partial image blur filters.

Different filters are automatically applied (image: Mever team)

A very important note of caution

Please take note, though, and read the instructions at the start of the linked document carefully: the last thing we want is for you to experience any negative impact from participating in the study. So if at any point you start to feel uncomfortable with the questionnaire and the examples used, do not feel obliged to continue or complete it. If in doubt, do not even start.

The research team is very grateful if you participate in the study and fill out the questionnaire, but also fully understands if you do NOT participate or abandon it along the way. The most important thing is to guard your mental wellbeing and NOT trigger anything negative.

Thank you! Take care and stay well!!


PS: if you have been contacted directly before (via mail), please use the link you were sent then, not the one included above.


Author: Jochen Spangenberg (DW), who also leads the study on the DW side.

vera.ai is co-funded by the European Commission under grant agreement ID 101070093, and the UK and Swiss authorities. This website reflects the views of the vera.ai consortium and respective contributors. The EU cannot be held responsible for any use which may be made of the information contained herein.