AI: Between a Free-for-All and Hard Boundaries

Starting August 2, 2025, the first operational provisions of the AI Act take effect, introducing transparency requirements for general-purpose AI models and launching a voluntary code of practice. The United States, by contrast, is betting on rapid innovation with its new AI Action Plan. Two visions compared—with concrete implications for the pharmaceutical world.

August 2, 2025 marks a pivotal moment for artificial intelligence (AI) in Europe, as key provisions of the AI Act (Regulation (EU) 2024/1689), the world’s first comprehensive legal framework on AI, become applicable. At the same time, America’s AI Action Plan, released by the White House in July 2025, outlines an ambitious strategy to cement U.S. leadership in AI through deregulation, infrastructure investment, and global competition—particularly with China.

These two approaches reflect opposing visions: Europe prioritizes safety, transparency, and the protection of rights, while the U.S. emphasizes rapid innovation and technological leadership. For the pharmaceutical industry—which relies on AI for drug discovery, clinical trial management, and production optimization—these differences will have a significant impact.

AI According to Brussels: Europe chooses the path of calculated risk

The AI Act classifies AI systems into four risk levels: unacceptable, high, limited, and minimal. Unacceptable-risk systems—such as social scoring or emotion recognition in the workplace—are banned as of February 2, 2025. High-risk systems, which include many AI applications in the pharmaceutical and clinical fields, must comply with strict obligations by August 2, 2026. These include:

  • Risk assessment and mitigation: providers must identify and reduce risks before placing the system on the market.
  • Data quality: datasets must be relevant, representative, and as free of bias as possible to prevent discriminatory outcomes.
  • Traceability: activity logging is required to ensure transparency for authorities.
  • Human oversight: mandatory human control to ensure safety and accuracy.
  • Detailed documentation to support compliance: comprehensive information on the system and its intended purpose.

These requirements aim to build public trust, but they also impose significant costs on companies—especially in the pharmaceutical sector, where AI handles sensitive health data. The AI Act also promotes AI literacy, requiring providers and deployers to ensure adequate AI competence among their staff.
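What traceability and human oversight can look like in practice is easier to see with an example. Below is a minimal sketch in Python of an audit-logged prediction step with a named human reviewer; it is purely illustrative, and score_candidate, the model version string, and the record fields are hypothetical stand-ins rather than anything prescribed by the AI Act.

```python
# Minimal sketch: audit logging plus a human sign-off slot around one
# AI inference call. All names and fields are illustrative assumptions.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(filename="ai_audit.log", level=logging.INFO)

def score_candidate(molecule_id: str) -> float:
    """Placeholder for a real model call (e.g., an activity-prediction model)."""
    return 0.87  # dummy score, for illustration only

def predict_with_audit(molecule_id: str, reviewer: str) -> dict:
    """Run inference, write an audit record, and leave the decision to a human."""
    score = score_candidate(molecule_id)
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "input": molecule_id,
        "model_version": "demo-0.1",  # traceability: which model produced the output
        "output": score,
        "reviewer": reviewer,         # human oversight: who is accountable
        "human_approved": None,       # set by the reviewer, never by the model
    }
    logging.info(json.dumps(record))  # append-only trail for later audits
    return record

print(predict_with_audit("MOL-0001", reviewer="j.doe"))
```

In production, such records would feed a tamper-evident, access-controlled store, so that an authority could reconstruct which model produced which output and who reviewed it.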

📌 Starting August 2: how the AI Act changes the rules

On August 2, 2025, the first operational provisions of the AI Act come into force, with a particular focus on general-purpose AI models (GPAI), which are widely used in the pharmaceutical sector as well.

The four key points:

  • Data Transparency: Obligation to disclose the sources of training datasets (e.g., scientific databases, websites) to ensure copyright compliance and accountability.

  • Systemic Risk Management: High-impact models must be assessed and secured against misuse or misleading outputs.

  • Training and AI Literacy: Providers and deployers must ensure that staff and others using AI on their behalf have an adequate level of AI literacy.

  • Voluntary Code of Practice: Guidelines to support self-regulation, issued by the AI Office.

August 2 is more than just a symbolic date—it’s the first real test of maturity for companies developing and deploying AI in regulated sectors. The pharmaceutical industry is on the front line.
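One way to make the data-transparency point concrete is a machine-readable summary of training sources published alongside the model documentation. The sketch below shows the idea; the field names, dataset entries, and URLs are invented for illustration and do not reproduce any official EU template.

```python
# Minimal sketch: a publishable summary of training-data sources.
# Dataset names, URLs, and field choices are hypothetical.
import json
from dataclasses import dataclass, asdict

@dataclass
class DataSource:
    name: str      # e.g., a scientific database or licensed corpus
    url: str
    license: str   # the terms under which the data was used
    records: int   # approximate volume, for accountability

sources = [
    DataSource("ExampleChemDB", "https://example.org/chemdb", "CC-BY-4.0", 1_200_000),
    DataSource("LicensedTrialTexts", "https://example.org/trials", "commercial licence", 85_000),
]

manifest = {
    "model": "demo-gpai-0.1",
    "summary_date": "2025-08-02",
    "training_sources": [asdict(s) for s in sources],
}

print(json.dumps(manifest, indent=2))  # publish with the model documentation
```

The value lies less in the format than in the discipline it forces: every source is recorded with its licence before training begins, which is the kind of accountability the copyright provision points to.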

The U.S. Approach: technological supremacy and deregulation

Today, a new frontier of scientific discovery lies before us, defined by transformative technologies such as artificial intelligence… Breakthroughs in these fields have the potential to reshape the global balance of power, spark entirely new industries, and revolutionize the way we live and work. As our global competitors race to exploit these technologies, it is a national security imperative for the United States to achieve and maintain unquestioned and unchallenged global technological dominance. To secure our future, we must harness the full power of American innovation.

Donald J. Trump, 45th and 47th President of the United States
(Preface to America’s AI Action Plan, July 2025)

With these words, Trump launches the new Artificial Intelligence Action Plan, marking a sharp paradigm shift from the regulatory approach of the Biden era. Artificial intelligence is no longer seen merely as an economic lever, but as a tool of geopolitical supremacy—one to be defended and accelerated, free from regulatory constraints that might slow it down. The objective is clear: to make AI the backbone of American global leadership.

America’s AI Action Plan is built on three pillars: innovation, infrastructure, and international diplomacy. The approach is deliberately pro-business, focusing on speed, competitiveness, and freedom of initiative. Key actions include:

  • Deregulation: Executive Order 14179, signed by Trump, dismantled many of the restrictions imposed during the Biden administration, creating a more flexible regulatory environment for companies and developers.
  • Open-source and access to computing resources: The plan encourages the development and dissemination of open-source AI models, offering support to startups, universities, and SMEs aiming to participate in the national ecosystem.
  • Strategic infrastructure: Massive investments are planned in the electrical grid, the construction of high-security data centers, and the revitalization of semiconductor manufacturing—seen as strategic assets.
  • Geopolitics and security: The U.S. is tightening export controls on AI technologies to rival nations (primarily China), promoting American technical standards globally and introducing measures against deepfakes.

For the pharmaceutical industry, this approach provides fertile ground for experimentation and rapid AI adoption—from drug discovery algorithms to production optimization and predictive systems for clinical trial management. However, the lack of robust regulatory safeguards raises ethical concerns, particularly regarding the use and protection of sensitive health data. The American plan addresses these risks mainly through cybersecurity measures and industry collaboration, rather than binding regulations.

Diverging cultural and strategic visions

The divide between the EU and the US is also cultural. Europe promotes a human-centric AI, grounded in rights and precaution. The United States prioritizes freedom of expression and market efficiency, avoiding “ideological” interference in technological development.

A key issue is the regulation of GPAI (General-Purpose AI):

  • The EU imposes transparency obligations on training data and risk management.
  • The US promotes GPAI as open source, with minimal restrictions.

In short: Europe builds public trust through regulation, the United States through the market.

A concise comparison of the main regulatory and operational dimensions of the two frameworks:

| Dimension | EU: AI Act | USA: AI Action Plan |
| --- | --- | --- |
| Regulatory approach | Risk-based, with strict requirements and sanctions | Strong deregulatory push, “innovation-first” logic, no ex-ante barriers |
| Implementation timeline | Gradual (2025–2027), first effective deadlines from August 2, 2025 | Immediate executive plan, strategic but non-binding |
| Extraterritorial scope | Applies to non-EU operators offering services within the Union | Focused on the U.S. industrial chain, with no external extension |
| Governance and transparency | Mandatory registration, audits, documentation, human oversight | Voluntary transparency; minimum requirements only for declared high-risk models |
| Sanctions and enforcement | Up to 7% of global turnover for serious violations | No direct sanctions provided; enforcement left to future guidelines |
| Energy and environment | No specific obligations on sustainability and energy consumption | Environmental streamlining to accelerate data centers and digital supply chains |


Pharma and AI: innovation under scrutiny

For pharmaceutical companies, August 2, 2025, marks the beginning of a new era. On one side, Europe—through the AI Act—imposes increasing responsibilities: traceability, human oversight, and risk management. An AI system used to identify new molecules, for example, will need to demonstrate data accuracy and the mitigation of potential biases. This means higher costs and slower processes, certainly—but also greater trust from regulatory authorities, investors, and citizens.
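What the “mitigation of potential biases” can mean operationally is, at minimum, a representation check on the training data before release. A minimal sketch follows, using an invented four-patient dataset and an arbitrary 40% floor chosen only for illustration; real thresholds would need clinical and statistical justification.

```python
# Minimal sketch: flag underrepresented subgroups in training data.
# The dataset and the 0.4 floor are illustrative assumptions.
from collections import Counter

def representation_report(samples: list[dict], attribute: str) -> dict[str, float]:
    """Share of each subgroup for a given attribute (e.g., sex, age band)."""
    counts = Counter(s[attribute] for s in samples)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

def flag_underrepresented(shares: dict[str, float], floor: float) -> list[str]:
    """Return the subgroups whose share falls below the chosen floor."""
    return [group for group, share in shares.items() if share < floor]

trial_data = [
    {"patient_id": 1, "sex": "F"}, {"patient_id": 2, "sex": "M"},
    {"patient_id": 3, "sex": "M"}, {"patient_id": 4, "sex": "M"},
]

shares = representation_report(trial_data, "sex")
print(shares)                              # {'F': 0.25, 'M': 0.75}
print(flag_underrepresented(shares, 0.4))  # ['F'] -> investigate before training
```

A real pipeline would run such checks across many attributes and record the outcomes in the technical documentation that high-risk systems must maintain.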

On the other side, the United States paves the way for speed. Regulatory freedom allows for agile testing and iteration—a clear advantage in terms of time-to-market. Yet it also brings risks if issues of safety, ethics, or data quality arise. Cybersecurity remains a priority, but it may not be enough to offset the lack of a formal regulatory framework.

Between rules and freedom, the balance hinges on trust

Global companies today navigate between two models:

  • Europe: clear regulations, high standards, slower but more predictable procedures.
  • USA: freedom to experiment, faster processes, but greater reputational risks.

Both systems focus on infrastructure and skills: the EU is strengthening its AI standards, while the US is investing in data centers, semiconductors, and specialized training. Meanwhile, China observes, invests, and responds.

In this landscape, the pharmaceutical industry can play a pivotal role. It stands to gain by positioning itself as an ambassador of responsible AI—capable of combining scientific rigor, innovation, and transparency. But to succeed, it must learn to adapt to different regulatory frameworks without losing sight of the core values of public health.

Governing speed with responsibility

The AI Act and America’s AI Action Plan embody two opposing visions of artificial intelligence: on the one hand, safety through regulation; on the other, leadership through innovation.

August 2, 2025, is not just a regulatory deadline: it is a litmus test for companies that want to be credible, competitive, and transparent. It will not (only) be a matter of compliance, but of reputation, market access, and the ability to lead the healthcare of the future with powerful and responsible tools.