Europe’s AI Act gets a roadmap: Final code of practice sets key rules for general-purpose models

The EU’s final GPAI Code offers a voluntary path to compliance before the AI Act’s August deadline. Find out how AI model providers can prepare now.

The European Commission released the final version of the General-Purpose AI (GPAI) Code of Practice on July 10, 2025, providing a structured, voluntary compliance tool to help AI model developers align with upcoming obligations under the EU AI Act. The Code, designed by a group of 13 independent experts with input from over 1,000 stakeholders—including model providers, academics, SMEs, civil society groups, and copyright holders—aims to clarify and streamline conformance ahead of the law’s general-purpose AI provisions becoming applicable on August 2, 2025.

The AI Act, which was officially adopted in June 2024, introduced sweeping obligations across a wide range of AI applications. However, the most complex and globally consequential elements pertain to general-purpose AI models—high-capacity systems like ChatGPT, Gemini, Claude, or Llama that serve as foundational layers for downstream applications. Enforcement of these provisions will be phased: August 2026 for new models and August 2027 for legacy systems already in the market.

According to institutional investors and analysts tracking European regulatory trends, the Code is being viewed as a stabilizing framework that offers legal certainty, lowers compliance costs, and potentially protects against enforcement risks for early signatories.

What provisions are included in the general-purpose AI code of practice and how do they address regulatory compliance?

The General-Purpose AI Code of Practice is structured across three core chapters—Transparency, Copyright, and Safety and Security—each of which addresses specific elements outlined in the AI Act. These chapters differentiate between general obligations applicable to all providers and heightened requirements for only those developing the most advanced and potentially high-risk AI models.

The Transparency chapter includes a standardized Model Documentation Form, which enables AI providers to organize and publish key details about their systems. These details include training data sources, intended uses, performance benchmarks, and licensing policies. This documentation framework is designed to support downstream system developers, who rely on foundational model transparency to comply with their own AI Act obligations.

The Copyright chapter provides concrete guidance on how developers can align their practices with European Union copyright law. This includes implementing policies that prevent regurgitation of copyrighted material, responding to rights-holder complaints, and putting in place clear terms for dataset curation and attribution.

The Safety and Security chapter is reserved for developers whose models are deemed to pose “systemic risks,” such as enabling the development of chemical weapons or inciting loss of human oversight. This section requires advanced model providers to conduct risk assessments, deploy post-deployment monitoring systems, and engage in adversarial testing and auditing to mitigate misuse.

These chapters form the technical and procedural basis upon which voluntary signatories can demonstrate alignment with Article 53 and Article 55 of the AI Act.

Which AI model developers are expected to adopt the code and what are the benefits of voluntary compliance?

The scope of the Code explicitly targets developers of general-purpose AI models, which are broadly defined as models trained on large datasets and capable of supporting a wide range of downstream applications. This includes—but is not limited to—models developed by OpenAI, Google DeepMind, Meta, Anthropic, Mistral, and Cohere, as well as open-source contributors within the European AI ecosystem.

Signatories to the Code are expected to benefit from reduced administrative burdens and improved legal certainty compared to those who attempt to prove compliance through ad hoc or case-specific methods. According to the European Commission, voluntarily signing the Code will be recognized as a valid form of demonstrating compliance with the AI Act once enforcement begins.

For smaller developers or those operating in niche domains, signing the Code may also provide reputational advantages, market access clarity, and simplified interactions with downstream integrators that are themselves regulated under the AI Act.

What deadlines are associated with the AI Act and what timeline does the code aim to support?

The AI Act’s general-purpose AI provisions officially enter into application on August 2, 2025. For newly developed models, enforcement begins one year later—on August 2, 2026. For models that were already available on the European market prior to the application date, enforcement begins August 2, 2027. These grace periods were introduced to give model developers sufficient time to assess their systems, implement transparency protocols, and establish risk mitigation mechanisms.

The Code of Practice, published less than a month before the application date, is intended to serve as an actionable interim framework. Once endorsed by the Commission and EU Member States, signatories can use the Code to structure their documentation and internal compliance programs in a way that aligns with the enforcement schedule.

Further clarification will be offered through upcoming Commission-issued guidelines, which are expected to define key regulatory terms such as what constitutes a general-purpose model, how systemic risk is evaluated, and who bears responsibility across supply chains.

How has the AI industry reacted to the code and are there concerns about its scope or enforcement?

Industry response to the Code has been mixed. While European institutions frame the Code as a collaborative success built through multistakeholder dialogue, several major tech firms and trade associations have voiced concern.

A coalition of over 40 European businesses—including Airbus, ASML, BNP Paribas, Mistral, and Siemens—recently urged the Commission to delay GPAI-specific enforcement by two years, arguing that the current compliance framework remains too vague and difficult to implement without finalized guidance.

The Computer & Communications Industry Association (CCIA Europe), a leading tech trade body, criticized the final Code as overly prescriptive and legally ambiguous. CCIA representatives pointed to concerns about burdensome external audits, the inclusion of systemic risk obligations for open-source models, and the absence of finalized guidelines that define exactly which models are in scope.

Despite this, the European Commission maintains that the Code is flexible and pragmatic, crafted to reflect the collective input of rights holders, developers, and safety experts. According to Henna Virkkunen, Executive Vice-President for Tech Sovereignty, Security and Democracy, the publication of the Code marks an important step toward making powerful AI models both innovative and safe.

What are analysts and investors saying about the code’s impact on compliance and strategic positioning?

Institutional investors and analysts have interpreted the final Code as a positive signal for regulatory predictability within the European AI market. They argue that voluntary signatories are likely to gain a first-mover advantage, benefiting from clarity on documentation, lowered enforcement risk, and a streamlined approach to cross-border compliance.

From a market access perspective, early adoption may also facilitate smoother relationships with downstream integrators in finance, healthcare, and public sector deployments, where documentation and model explainability are already regulatory expectations.

However, analysts also caution that enforcement challenges remain. With many critical terms still undefined—including how the AI Office will categorize systemic risk—AI startups and global developers alike are expected to proceed cautiously until the accompanying guidelines are published later this month.

What is the long-term outlook for general-purpose AI providers under the EU’s AI regulatory framework?

Going forward, general-purpose AI model providers will face a progressively tightening compliance landscape. In the short term, those who voluntarily adopt the Code must begin building internal systems for documentation, safety testing, and copyright policy enforcement. For providers whose models are flagged as posing systemic risk, the compliance burden will also include impact assessments, continuous monitoring, and third-party evaluations.

Over the next 18 to 24 months, AI developers operating in Europe—or seeking to distribute general-purpose models in the EU market—must adapt their product pipelines to account for audit readiness, transparency form submissions, and traceable datasets. The Commission has signaled that noncompliance may trigger investigations or fines of up to €35 million or 7 percent of global annual turnover, whichever is higher.

Despite lingering ambiguities, analysts anticipate that adherence to the Code will become a de facto expectation for any serious market participant. As the EU moves from guidance to enforcement, voluntary compliance may no longer be viewed as optional, particularly in high-risk domains.

What are the strategic implications of adopting the EU’s general-purpose AI code for global providers?

The final General-Purpose AI Code of Practice is best understood as both a carrot and a stick. On one hand, it offers early adopters a clear pathway to demonstrate alignment with the AI Act while avoiding heavy documentation burdens. On the other, it implicitly defines the regulatory benchmarks that will guide EU enforcement starting next year.

For global players, this means EU-compliant development is no longer a theoretical challenge—it is now a competitive necessity. The flexibility built into the voluntary structure may enable smaller innovators to navigate compliance at lower cost, but the next wave of Commission guidelines will determine whether that flexibility endures.

The real test will come not with the Code’s publication, but with its implementation, as developers move from regulatory paperwork to live product governance.


Discover more from Business-News-Today.com

Subscribe to get the latest posts sent to your email.

Total
0
Shares
Related Posts