EU AI rules seen as potential innovation killer by German firms

A new survey by Deloitte reveals a significant lack of preparedness among German companies for the European Union’s Artificial Intelligence Act (EU AI Act), which came into force in August 2024. The survey shows that 48.6% of German companies have not engaged seriously with the regulatory requirements, and only 26.2% have started preparations for compliance, exposing a major readiness gap as the EU moves to enforce one of the world’s most stringent regulatory frameworks governing artificial intelligence. The gap is particularly concerning given the penalties for non-compliance, which can reach 7% of a company’s global annual turnover or €35 million, whichever is higher.

The EU AI Act: A new global standard or an innovation killer?

The EU AI Act represents a groundbreaking shift in AI governance globally, introducing the first comprehensive framework to regulate artificial intelligence systems. The Act takes a risk-based approach, categorizing AI systems into four levels: Minimal, Limited, High, and Unacceptable risk. Systems classified under “high risk,” such as those used in recruitment, healthcare, or law enforcement, must meet strict standards, including thorough documentation, risk assessments, and data governance protocols. At the extreme, systems categorized as posing an “unacceptable risk,” like those used for social scoring or subliminal manipulation, are completely banned. These stringent measures are designed to protect fundamental rights but have also raised concerns about their impact on innovation.

Deloitte’s survey found that 52.3% of German companies believe the AI Act will constrain their innovation capabilities. This concern reflects a broader apprehension that the EU AI Act may impose significant obstacles rather than create opportunities for businesses to develop new AI technologies. While only 18.5% of companies anticipate a positive impact on innovation, 47.4% perceive the regulation as more of a hindrance to developing AI-based applications than a facilitator. This tension between regulatory oversight and technological advancement remains a critical debate as companies assess the future landscape of AI development in Europe.

A slow path to compliance: Strategic actions needed

Despite the urgency of the situation, many companies have yet to take concrete steps to align with the EU AI Act. The Deloitte survey reveals that only 7.5% of companies have established a dedicated task force for AI compliance, 9.1% have assigned the task to a specific department, and 17.6% have initiated specific projects to address the new requirements. More than half (53.8%) have not taken any measures at all, a widespread lack of engagement that could leave them exposed to severe penalties when the Act becomes fully enforceable in August 2026.

Experts from PricewaterhouseCoopers suggest that companies must urgently incorporate AI compliance into their existing governance, risk management, and compliance (GRC) frameworks. The key to navigating the AI Act, they argue, is a phased approach: conducting a comprehensive gap assessment, developing tailored compliance strategies, and implementing changes systematically. This would involve defining specific actions and milestones, aligning compliance efforts with existing structures, and creating a clear roadmap to achieve compliance.

Divided opinions on trust and legal certainty

The Deloitte survey also highlights mixed reactions within the industry about the potential benefits of the EU AI Act, particularly in terms of trust and legal certainty. Approximately 39% of respondents expect the regulation to provide greater legal clarity, but 35% disagree, and 26.3% remain undecided. A similar divide is evident regarding trust in AI technologies: 34.9% believe that the AI Act will increase public trust in AI, while 30.8% do not see any positive impact, and another 34.3% are undecided. These divisions indicate a broader uncertainty within the industry about whether the Act will enhance market stability or impose unnecessary restrictions that stifle growth.

According to legal experts from DLA Piper, the EU AI Act’s risk-based approach could set a global standard, influencing AI regulations in other jurisdictions such as the United States and Canada. The regulation requires companies to take a comprehensive approach to risk management and compliance, ensuring they are prepared to meet both current and future regulatory demands. For businesses operating within or in partnership with the EU, early compliance could be a strategic advantage, safeguarding against potential fines and positioning them as leaders in ethical AI practices.

The global context: Aligning with the EU AI Act

The challenges faced by German companies are part of a broader global struggle to align AI development with emerging regulations. Deloitte’s global study, ‘State of GenAI in the Enterprise,’ which surveyed nearly 2,800 managers worldwide, found that compliance with regulations, risk management, and the absence of robust governance models are among the most significant barriers to AI adoption. Only 23% of companies globally feel adequately prepared to manage these challenges, underscoring the pressing need for comprehensive frameworks that balance innovation with regulatory compliance.

Sarah Becker, Deloitte’s Digital & AI Ethics Lead, explains that while German companies are generally accustomed to regulatory compliance—especially in highly regulated sectors like finance and healthcare—the rapid evolution of AI and its associated risks present a unique challenge. Companies must pivot quickly from traditional compliance approaches to new, more dynamic models that can accommodate the fast-paced nature of AI technology development. Becker’s comments reflect the urgency for German firms to integrate AI governance into their core strategies, or risk falling behind in both compliance and innovation.

