Transparent but incomprehensible
Investigating the relation between transparency, explanations, and usability in automated decision-making
Time: Fri 2022-09-16 13.30
Location: F3, Lindstedtsvägen 26 & 28, Stockholm
Language: English
Subject area: Human-computer Interaction
Doctoral student: Jacob Dexe, Media Technology and Interaction Design, MID
Opponent: Associate Professor Wanda Presthus
Supervisor: Henrik Artman, Media Technology and Interaction Design, MID; Ulrik Franke, Media Technology and Interaction Design, MID
Abstract
Transparency is almost always seen as a desirable state of affairs. Governments should be more transparent towards their citizens, and corporations should be more transparent towards both public authorities and their customers. More transparency means more information that citizens can use to make decisions about their daily lives, and as the amount of information in society grows, those citizens should be able to make ever more choices that align with their preferences. But this story is slightly too good to be true. In practice, citizens are skeptical towards increased data collection, demand stricter transparency requirements, and seem to lack both the time and the ability to properly engage with all the information available.
In this thesis, the relation between transparency, explanations, and usability is investigated in the context of automated decision-making. Besides showing the benefits that transparency can have, the thesis presents a wide array of problems with transparency and shows how it can be harder to achieve than most assume. It explores the explanations that often constitute transparency and their limitations, developments in automation and algorithmic decision-making, and how society tends to regulate these matters. It then applies these frameworks to investigate how human-computer interaction in general, and usability in particular, can help transparency deliver the many benefits it promises.
Four papers are presented that study the topic from various perspectives. Paper I looks at how governments give guidance on achieving competitive advantages with ethical AI, while Paper II studies how insurance professionals view the benefits and limitations of transparency. Papers III and IV both study transparency in practice through requests for information under the GDPR. But while Paper III provides a comparative study of GDPR implementation in five countries, Paper IV instead shows how transparency can fail and explores why.
The thesis concludes by showing that while transparency does indeed have many benefits, it also has limitations. Companies and other actors need to be aware that transparency is sometimes simply not the right solution, and that explanations have limitations both in automated systems and for the humans who receive them. Transparency as a tool can achieve certain goals, but good transparency requires good strategies, active choices, and an awareness of what users need.