AI Risks Unveiled: How to Safeguard Your Business in the Age of Generative AI

Artificial Intelligence (AI) has taken a front seat, steering innovation and operational efficiency across industries. But with great power comes great responsibility, and the rapid adoption of generative AI (GenAI) is no exception. A recent survey by Gartner, Inc. reveals a significant shift in the audit landscape, spotlighting AI-related risks as the area with the largest increase in audit coverage planned for 2024.

As organizations increasingly embed new AI technologies into their operations, internal auditors are expanding their scope to cover a range of AI-related risks, from control failures and unreliable outputs to advanced cyber threats. According to Thomas Teravainen, a research specialist with the Gartner Legal, Risk & Compliance Leaders practice, "Half of the top six risks with the greatest increase in audit coverage are AI-related." This underscores the pressing need to address and mitigate the potential pitfalls of AI implementation.

The Gartner survey, conducted in August 2023, asked 102 chief audit executives (CAEs) to rate the importance of providing assurance over 35 risks. Among these, strategic change management, diversity, equity and inclusion, and organizational culture were highlighted alongside AI-enabled cyber threats, AI control failures, and unreliable outputs from AI models. This breadth of audit coverage reflects the complex and interconnected risks organizations face in today's digital age.

One of the most striking findings from the survey is the confidence gap among internal auditors regarding their ability to provide effective oversight of AI risks. Only a small share of respondents felt "very confident" in their ability to provide assurance over the top AI-related risks, signaling a pressing need for enhanced skills, knowledge, and tools to navigate the AI risk landscape effectively.

The stakes are high: GenAI applications, both publicly available and developed in-house, pose new and heightened risks around data and information security, privacy, intellectual property protection, copyright infringement, and the reliability of outputs. The widespread use of GenAI, particularly in customer-facing business units, makes it all the more important to address unreliable outputs from AI models, such as biased or inaccurate information, to protect organizations from reputational damage and legal consequences.

This surge in AI-related audit coverage is a response not only to the inherent risks but also to the significant impact AI is expected to have on organizations in the coming years. With CEOs and CFOs ranking AI as the technology most likely to influence their organizations, closing the confidence gap among CAEs becomes crucial to meeting stakeholder expectations and navigating the future of risk management.
