
The AI Act, approved by the European Parliament on 13 March 2024 and published in the Official Journal of the EU as Regulation (EU) 2024/1689, is the world's first comprehensive legal framework for the regulation of artificial intelligence.
Here’s what we’re talking about:
AI Act, an epochal change
Classification of AI systems and obligations
First operational directives
Systems at unacceptable risk
Mandatory training
The other deadlines
European Commission guidelines
Severe penalties for violators
What changes in the pharma sector
Strategic opportunities
AI Act, an epochal change
The AI Act marks a radical change for all companies that use or develop artificial intelligence technologies. The main objective of the legislation is to ensure the safety of European citizens, protect fundamental rights and promote the responsible development of these technologies, without hindering technological progress and innovation.
The regulation concerns all companies that develop, distribute, import or use artificial intelligence systems within the European market. This includes:
- AI system providers
- Distributors and importers
- Users of high-risk AI systems
- Providers of general purpose AI models (e.g. GPT)
- AI system integrators, who combine different AI modules to create new systems.
Classification of AI systems and obligations
To help companies understand how to navigate this new regulatory landscape, the AI Act divides AI systems into four risk categories (a short illustrative sketch follows the list):
- Unacceptable risk: this category includes those systems considered dangerous for people’s rights and freedoms. For example, systems that manipulate people’s behavior subliminally, social scoring based on individual behavior and real-time remote biometric identification systems in public spaces are prohibited, except for specific security-related exceptions.
- High risk: here we find systems used in critical sectors, such as healthcare, transportation, human resources and sensitive infrastructure. These systems can be used, but only under very strict rules: companies must carry out preventive impact assessments on fundamental rights, adopt quality management systems and ensure constant monitoring.
- Limited risk: this category includes systems such as chatbots and AI-based text or image generation tools. For these, a transparency obligation applies: the user must always be informed that they are interacting with an artificial intelligence.
- Minimal risk: this category covers AI systems that do not pose significant risks to citizens. There are no specific obligations, but companies are encouraged to follow safety and ethical principles.
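To make the four tiers easier to reason about, here is a minimal Python sketch that encodes them as an enumeration and maps a few example use cases from the lists in this article to a tier. The names (RiskTier, EXAMPLE_USE_CASES) and the mapping itself are our own illustration, not an official classification tool: a real assessment must follow the regulation's annexes and legal review.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers defined by the AI Act."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "allowed only under strict obligations"
    LIMITED = "allowed with transparency duties"
    MINIMAL = "no specific obligations"

# Purely illustrative mapping of example use cases to a tier.
EXAMPLE_USE_CASES = {
    "social scoring of individuals": RiskTier.UNACCEPTABLE,
    "real-time remote biometric identification in public": RiskTier.UNACCEPTABLE,
    "diagnostic support in healthcare": RiskTier.HIGH,
    "CV screening in human resources": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filtering": RiskTier.MINIMAL,
}

for use_case, tier in EXAMPLE_USE_CASES.items():
    print(f"{use_case}: {tier.name} ({tier.value})")
```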
First operational directives
On 2 February 2025, the first measures provided for by the AI Act came into force:
- An absolute ban on the use of AI systems classified as posing an unacceptable risk.
- An obligation for companies to adequately train employees involved in decision-making processes and in the use of AI systems, so that they are fully aware of the risks and of how to use these technologies responsibly.
Systems at unacceptable risk
Since 2 February, the use of artificial intelligence systems considered to be at “unacceptable risk” has been prohibited. These include practices such as:
- Systems that manipulate human behavior through subliminal or deceptive techniques.
- Systems that exploit specific vulnerabilities of individuals or groups, for example based on age, disability or socio-economic status.
- Systems that classify people based on social behavior or personal characteristics, leading to unfavorable treatment.
- Systems that assess or predict the risk of criminal behavior based on personality traits.
- Systems that create or expand facial recognition databases through the untargeted collection of images from the web or surveillance footage.
- Systems that infer emotions in the workplace or educational environments, except for medical or safety needs.
- Systems that categorize individuals based on biometric data to infer sensitive information such as race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation.
- Systems that collect biometric information in real time in accessible public spaces for law enforcement purposes, except in limited circumstances.
Mandatory training
From 2 February, companies must ensure that their staff involved in using or operating AI systems have a sufficient level of “AI literacy”. This involves providing adequate training so that employees understand the opportunities and risks associated with AI, as well as the potential harm that such systems can cause.
Article 4 of the regulation establishes that the requirement applies to both providers and users of AI systems, regardless of the level of risk. Non-EU companies operating in the European market are also affected.
AI literacy includes specific skills:
- Knowledge of AI technologies (machine learning, neural networks, LLMs).
- Understanding of how AI is applied in different business sectors.
- Critical thinking about algorithmic risks and biases.
- Awareness of legal obligations regarding AI.
Training must be tailored to the audience:
- Basic for all employees on general principles.
- Advanced for managers and IT leads, focused on compliance and risk management.
- Specialized for specific teams (e.g. HR for bias in selection systems).
The other deadlines
- 2 May 2025: adoption of voluntary codes of conduct to facilitate compliance.
- 2 August 2025: application of the rules for general-purpose AI systems.
- 2 August 2026: full application of the rules for high-risk AI systems.
- 2 August 2027: compliance for AI systems embedded in regulated products, e.g. medical devices and pharmaceuticals (see the sketch after this list).
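For teams that want to track these dates internally, the timeline above can be encoded as plain data, as in the minimal sketch below. The MILESTONES list and the upcoming helper are our own illustrative names, not part of any official tooling; an internal compliance tracker could build on this starting point.

```python
from datetime import date

# AI Act milestones, taken from the deadlines listed above.
MILESTONES = [
    (date(2025, 2, 2), "bans on unacceptable-risk systems; AI literacy duty"),
    (date(2025, 5, 2), "voluntary codes of conduct"),
    (date(2025, 8, 2), "rules for general-purpose AI models"),
    (date(2026, 8, 2), "full application of high-risk system rules"),
    (date(2027, 8, 2), "AI embedded in regulated products"),
]

def upcoming(today: date) -> list[str]:
    """Return the milestones that have not yet taken effect."""
    return [f"{d.isoformat()}: {label}" for d, label in MILESTONES if d > today]

for line in upcoming(date.today()):
    print(line)
```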
European Commission guidelines
To support companies in applying the new rules, the European Commission published practical guidelines on 4 February 2025.
These documents offer clarifications and case studies to help businesses understand which practices are prohibited and how to ensure compliance.
Severe penalties for violators
Companies that fail to comply risk very heavy sanctions: fines of up to 35 million euros or 7% of annual global turnover, whichever is higher (a worked example follows below).
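To make the "whichever is higher" rule concrete, here is a minimal worked example. The function name max_fine_eur is our own illustration, and the figures shown are the caps for the most serious violations described above.

```python
def max_fine_eur(annual_global_turnover_eur: float) -> float:
    """Cap for the most serious violations: the higher of
    EUR 35 million and 7% of annual global turnover."""
    return max(35_000_000.0, 0.07 * annual_global_turnover_eur)

# A company with EUR 1 billion in turnover: 7% (EUR 70M) exceeds EUR 35M.
print(max_fine_eur(1_000_000_000))  # 70000000.0
# A company with EUR 100 million in turnover: 7% is only EUR 7M,
# so the EUR 35 million figure applies instead.
print(max_fine_eur(100_000_000))    # 35000000.0
```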
Checks will be carried out by national authorities, coordinated by the European AI Office, to ensure the rules are applied uniformly across the European Union.
What changes in the pharma sector
The pharmaceutical sector is undoubtedly one of the most exposed to the provisions of the AI Act. The sector has long used artificial intelligence in various phases of the production cycle, from the research and development of new molecules to the distribution of drugs.
A first critical area is that of medical devices and diagnostics. AI tools that support doctors in diagnosis, patient monitoring or clinical decision-making are increasingly widespread. These systems will be classified as high risk, so companies will have to undergo rigorous certification processes, implement impact assessments on fundamental rights, manage risks in a structured way and set up a post-market surveillance system.
Pharmaceutical manufacturing is also undergoing a profound transformation thanks to advanced AI-based automation. The systems that supervise production lines will have to guarantee traceability and high safety standards, so that drugs comply with GMP (Good Manufacturing Practice) protocols, avoiding production stoppages and non-compliance.
In the management of clinical and genomic data, artificial intelligence now makes it possible to analyze large amounts of data from clinical trials and genetic studies. However, this entails an obligation to protect sensitive data, to ensure transparency in the management of information and to adopt tools that prevent discrimination and algorithmic bias.
Finally, no less relevant is the impact on distribution and the supply chain. Pharmaceutical companies already use AI-based predictive systems to optimize logistics and anticipate possible stock shortages. These tools will need to be designed to ensure reliability and complete traceability, so as to avoid interruptions in the supply of vital medicines.
For pharmaceutical companies, compliance with the AI Act represents not only a regulatory obligation, but also an opportunity to strengthen the trust of doctors, patients and regulatory authorities, consolidating their competitive positioning on the market.
Strategic opportunities
The AI Act inaugurates a new regulatory season for European companies, imposing stringent rules on the use of artificial intelligence.
Companies must act in good time, adopting control and training systems to ensure compliance and thus turning an obligation into a strategic opportunity. Integrating principles of transparency, security and reliability into the business can translate into a competitive advantage: the trust of customers and business partners will be strengthened. Furthermore, certified compliance with European standards could become a fundamental requirement for accessing new markets and attracting investors.