Why Offering Algorithmic Explanations Might Do More Harm Than Good

In a world increasingly shaped by algorithmic decision making, transparency is often seen as the antidote to public skepticism. Policymakers and companies alike have embraced initiatives that give consumers access to explanations of how algorithms operate. But a new study, "Unintended effects of algorithmic transparency: The mere prospect of an explanation can foster the illusion of understanding how an algorithm works," by Massimiliano Ostinelli, Andrea Bonezzi and Monika Lisjak, challenges this widely accepted belief. Published in the Journal of Consumer Psychology, the research reveals that simply knowing an explanation is available, even without reading it, can make people feel as though they understand how an algorithm works. That perceived understanding, however, is often an illusion.

At the center of this phenomenon is the psychological concept of empowerment. The authors argue that when individuals are told an explanation exists, they experience a sense of control, and this feeling can generate an illusory sense of understanding even if they never engage with the explanation itself. In other words, the mere availability of an explanation acts as a cognitive shortcut, giving consumers confidence in their understanding and decisions despite little or no real insight into the algorithm's inner workings.

Across five carefully designed experiments, the researchers found consistent support for this idea. In the first experiment, participants who were shown a link to an explanation of a credit-scoring algorithm reported higher understanding than those who were not, even though they never clicked the link. The second experiment revealed that this false sense of understanding can alter behavior: participants were more likely to choose a less accurate robo-advisor simply because it offered an explanation, illustrating how perceived transparency can outweigh actual algorithmic performance in consumer decision making.

To rule out brand trust or corporate transparency as alternative explanations, the third experiment used a well-known national bank as the context and found that perceived empowerment, not trust, drove the illusion of understanding. The fourth experiment compared explanations written for experts with explanations written for laypeople. Surprisingly, the layperson-oriented explanations created a stronger sense of empowerment and a greater illusion of understanding, despite being less technical. This finding suggests that making information feel accessible can amplify confidence without improving comprehension.

In the fifth and final experiment, the authors showed that when an explanation was described as not useful for making better decisions, the empowerment effect diminished. This provided further evidence that the illusion is tied not merely to the presence of an explanation but to its perceived usefulness in guiding action.

These findings have significant implications for digital interface design and regulatory policy. While many assume that providing access to explanations will make consumers more informed and cautious, the opposite may be true. When people feel empowered by the mere existence of an explanation, they may become overconfident in their understanding and make decisions based on perceived rather than actual knowledge. In contexts such as healthcare, finance and the legal system, such misplaced trust could have serious consequences.

Current regulatory frameworks such as the European Union's General Data Protection Regulation require companies to provide consumers with meaningful information about automated decisions, a provision often described as a right to explanation. This study suggests that such policies, while well-intentioned, may inadvertently foster a false sense of security and encourage blind reliance on algorithmic outputs. Companies and regulators may need to go beyond making explanations available and ensure that they are actually engaged with and understood.

The full article by Ostinelli, Bonezzi and Lisjak is available in the Journal of Consumer Psychology.