Generative AI is transforming how professional services are delivered, but how do people actually respond when a machine offers them expert advice? In the article “Beyond Algorithms: A Multifaceted Exploration of Trust, Confidence, and Blame in Generative AI-Assisted Advice”, researchers Nirmalee I. Raddatz, Paul A. Raddatz, Kehinde M. Ogunade, Sohee Kim, and Darsheika L. Williams take a closer look at how users interact with AI-based tax guidance. Published in Accounting Horizons in 2025, the study offers fresh insights into the psychological dynamics that emerge when individuals rely on tools like “TaxAssistAI” instead of a traditional human expert.
The authors designed four controlled experiments involving 374 U.S.-based participants, each of whom had filed taxes in the prior year. These participants were asked to respond to tax-related scenarios in which the advice came from a human CPA, from the AI tool, or from both. By comparing responses in terms of confidence, trust, willingness to act, and blame attribution, the researchers uncovered consistent patterns in how people perceive and engage with generative AI in advisory contexts.
Across the experiments, people tended to express more confidence in advice from a human expert than in advice delivered by an AI tool. This result aligns with the well-documented phenomenon of algorithm aversion, in which users hold machines to higher standards and are less forgiving of their mistakes. Although the difference in confidence between AI and human advice was not always statistically significant, the direction was consistent: human advisers are still seen as more trustworthy in complex, judgment-heavy domains like tax.
Interestingly, labeling the AI tool as “premium” did not increase user confidence. In fact, participants who were not told which version of the tool they were using expressed the highest trust in the advice. This finding challenges the common marketing assumption that premium branding or pricing enhances perceived quality. In the case of tax advice, users appear to rely more on perceived relevance and context than on superficial cues about version or cost.
One of the most striking findings came when both the AI and the human CPA gave the same advice. When this alignment occurred, users reported significantly higher confidence in the advice and were more likely to act on it. This suggests that combining AI-generated recommendations with human judgment could be a powerful strategy for increasing trust. Rather than framing AI as a replacement for human expertise, presenting it as a complement seems to create a form of reassurance that neither source can provide on its own.
The fourth experiment explored what happens when advice turns out to be wrong. When the error came from the AI, users blamed themselves more than they did when the mistake came from a human. Yet they also showed a greater willingness to use the AI tool again, even after a negative outcome. This paradox of greater self-blame coupled with greater forgiveness of the tool highlights a key difference in how users process machine versus human failure: AI seems to be viewed less as a responsible actor and more as a neutral instrument, shifting the burden of judgment onto the user.
The research carries important implications for the future of tax advisory services and other professional domains increasingly shaped by AI. To build trust and manage expectations, firms need to communicate clearly what AI tools can and cannot do. Framing AI as a collaborative partner, rather than a standalone authority, appears to reduce skepticism and encourage more informed use. Furthermore, the findings underscore the importance of human oversight in AI-supported workflows, especially in contexts where mistakes carry financial or legal consequences.
As AI systems continue to evolve and find their way into fields like auditing, law, and healthcare, the behavioral patterns identified in this study may help organizations design better tools and user experiences. Understanding how people build confidence, assign responsibility, and make decisions in the presence of intelligent systems will be essential for effective human-AI collaboration.
The full article “Beyond Algorithms: A Multifaceted Exploration of Trust, Confidence, and Blame in Generative AI-Assisted Advice” by Nirmalee I. Raddatz, Paul A. Raddatz, Kehinde M. Ogunade, Sohee Kim, and Darsheika L. Williams was published in Accounting Horizons and is available online.