In the World Economic Forum article “Europe is lagging in AI adoption – how can businesses close the gap?”, authors Cathy Li and Andrew Caruana Galizia warn that Europe risks losing ground in the global race for artificial intelligence. Despite strong regulatory leadership and political ambition, European companies have been slow to scale meaningful AI projects compared with competitors in the United States and Asia. This discrepancy, the authors argue, threatens Europe’s long-term competitiveness and calls for urgent action from business leaders, policymakers, and governance professionals alike.
The article notes that adoption rates are uneven across Europe. Nordic and Benelux countries such as Denmark, Finland, and the Netherlands are leading the way, with Denmark showing enterprise AI adoption of nearly 28 percent compared to the EU average of only 13.5 percent. In contrast, countries in Eastern Europe remain significantly behind, with Romania reporting adoption rates as low as 3 percent. These disparities not only reveal a fragmented landscape but also risk reinforcing structural divides across the continent.
The authors emphasize that Europe’s focus on regulation, while valuable, is not sufficient to accelerate adoption. The EU AI Act of 2024 introduced a harmonized framework for risk classification, transparency, and governance, including the creation of the European Artificial Intelligence Board. While this legislation sets global standards for responsible AI, regulation alone will not create competitive advantage. The real challenge lies in ensuring that firms have the infrastructure, data, and skills needed to deploy AI effectively. Without significant investment and cross-border collaboration, European organizations may comply with rules but still fall behind in innovation and market impact.
Several factors are highlighted as essential levers for progress:
- Small and medium-sized enterprises need targeted support, since they often lack the resources to invest in AI talent and infrastructure.
- Europe must expand AI infrastructure, such as supercomputing capacity and “AI gigafactories,” to reduce dependence on foreign providers.
- Access to high-quality, standardized data must be improved, since AI systems are only as good as the data they are trained on.
- Regulatory harmonization across member states is necessary to prevent fragmentation and reduce compliance burdens.
The article also points to cultural and strategic differences in how Europe approaches AI compared to other regions. While the U.S. invests heavily in foundational AI models and the microchips that power them, many European businesses concentrate on applied innovation and adapting existing models to industry-specific use cases. This incremental approach reflects Europe’s strengths but also deepens dependency on technology developed elsewhere. The Nordic countries demonstrate that success depends on openness, with collaborative ecosystems that bring together corporates, startups, and international players.
Another challenge is maturity. Fewer than 1 percent of companies worldwide have fully operationalized responsible AI practices, and in Europe more than 60 percent of firms are still in the earliest stages of AI maturity. This means that governance frameworks, ethical oversight, and transparent risk management are often underdeveloped. Internal auditors and governance professionals therefore face an urgent task: to close these gaps before AI adoption accelerates further. Key risk areas they must monitor include:
- Data governance and quality, ensuring that models are trained on representative and reliable datasets (an illustrative check is sketched after this list).
- Model bias and discrimination, which can undermine trust and expose organizations to legal challenges.
- Regulatory compliance, especially with the new EU AI Act requirements.
- Ethical oversight, transparency, and explainability of AI-driven decisions.
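To make the first two risk areas more concrete, here is a minimal, illustrative sketch of the kind of check an audit team might run on a sample of training data or decision logs: it flags under-represented groups and compares approval rates across groups as a crude proxy for disparate outcomes. Nothing in it comes from the article; the record format, group labels, and thresholds (`MIN_SHARE`, `MAX_RATE_GAP`) are hypothetical placeholders that an audit team would set for its own context.

```python
from collections import Counter

# Hypothetical audit sample: each record is (demographic_group, model_decision).
# In practice this would be drawn from the organization's own data or logs.
records = [
    ("group_a", "approved"), ("group_a", "approved"), ("group_a", "denied"),
    ("group_a", "approved"), ("group_b", "denied"), ("group_b", "denied"),
    ("group_b", "approved"), ("group_c", "approved"), ("group_c", "denied"),
]

MIN_SHARE = 0.10      # hypothetical threshold: flag groups below 10% of the sample
MAX_RATE_GAP = 0.20   # hypothetical threshold: flag approval-rate gaps above 20 points

group_counts = Counter(group for group, _ in records)
total = len(records)

# 1. Representation check: is any group badly under-represented in the sample?
for group, count in group_counts.items():
    share = count / total
    if share < MIN_SHARE:
        print(f"representation warning: {group} is only {share:.0%} of the sample")

# 2. Outcome-disparity check: compare approval rates across groups.
approval_rates = {
    group: sum(1 for g, d in records if g == group and d == "approved") / count
    for group, count in group_counts.items()
}
print("approval rates by group:", {g: f"{r:.0%}" for g, r in approval_rates.items()})
gap = max(approval_rates.values()) - min(approval_rates.values())
if gap > MAX_RATE_GAP:
    print(f"disparity warning: approval-rate gap of {gap:.0%} exceeds threshold")
```

A real audit would of course rely on much larger samples, significance testing, and fairness metrics suited to the specific use case; the point is simply that the risk areas above can be translated into concrete, repeatable checks.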
For corporate leaders, the implications are clear. Lagging AI adoption can lead to competitive disadvantage, slower innovation, and missed efficiencies, while rushing adoption without oversight can create compliance failures, reputational damage, and ethical risks. Internal audit functions therefore play a pivotal role, bridging the gap between technological innovation and governance requirements. They can help ensure that AI initiatives are not only implemented, but also aligned with corporate strategy and supported by sound risk management.
The authors conclude that Europe cannot afford to remain a follower in the AI race. Closing the gap requires more than regulation: it demands investments in infrastructure, targeted SME support, improved access to data, and harmonized governance frameworks. Businesses must embed AI into their core operations, policymakers must foster competitive ecosystems, and internal auditors must act as trusted advisors to ensure that AI adoption is both responsible and effective.
The article “Europe is lagging in AI adoption – how can businesses close the gap?” by Cathy Li and Andrew Caruana Galizia was published on the World Economic Forum website.
