EU AI Act

Regulations and opportunities

Artificial Intelligence (AI) is a crucial technology for digital transformation in both private and public organisations. It is predicted that by 2030, AI will be an integral part of all processes and products throughout the European value chain. 

Alongside the economic benefits of AI, trust in its performance, security, reliability, and fairness is vital in determining its adoption. Users, customers, and decision-makers require certainty and trust, which can only be achieved through sound governance based on best practices and accepted standards.

To address this need, the European Union has developed and introduced harmonised rules for AI systems. Known as the "EU AI Act," this comprehensive regulation applies to both private and public organisations, whether they provide or deploy AI systems. The EU AI Act aims to promote the creation of high-quality AI systems through uniform standards while protecting EU citizens.

The EU AI Act emphasises the importance of holistic AI governance, ensuring the development and use of AI systems that are of high quality and transparent throughout their lifecycle. Organisations must proactively implement the requirements of the EU AI Act to comply with legal obligations and mitigate potential risks. Failure to do so may result in fines under the EU AI Act, as well as penalties under the General Data Protection Regulation (GDPR) and industry-specific regulations. Moreover, organisations may face liability claims if they use a defective AI system.

The EU AI Act classifies AI systems into four categories based on their potential risks: unacceptable risk, high risk, limited risk, and minimal risk. Each category has different requirements and obligations imposed on AI systems.
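
As an illustration, the sketch below shows how an organisation might record its AI systems in an internal inventory, tagged with one of these four risk categories. This is a minimal, hypothetical example: the category names follow the Act, but the inventory structure and its field names are our own illustration rather than anything the regulation prescribes.

```python
from dataclasses import dataclass
from enum import Enum


class RiskCategory(Enum):
    """The four risk categories distinguished by the EU AI Act."""
    UNACCEPTABLE = "unacceptable risk"  # prohibited practices
    HIGH = "high risk"                  # strict requirements, conformity assessment
    LIMITED = "limited risk"            # mainly transparency obligations
    MINIMAL = "minimal risk"            # no additional obligations


@dataclass
class AISystemRecord:
    """One entry in a hypothetical internal AI system inventory."""
    name: str
    purpose: str
    risk_category: RiskCategory
    owner: str  # the accountable business owner


# Example: a CV-screening tool used in recruitment, which the Act treats as high risk.
inventory = [
    AISystemRecord(
        name="cv-screening-assistant",
        purpose="Ranking job applications for recruiters",
        risk_category=RiskCategory.HIGH,
        owner="HR Operations",
    ),
]

for record in inventory:
    print(f"{record.name}: {record.risk_category.value}")
```

Keeping such an inventory up to date is a practical first step: it makes clear which systems fall under which set of obligations before any of the detailed requirements are addressed.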

The EU AI Act is expected to be formally adopted soon. The ban on AI systems posing an unacceptable risk will take effect six months after the Act enters into force, and the remaining requirements will be phased in towards 2026.

Implementing the EU AI Act presents challenges for organisations, but it also offers opportunities to enhance the quality of applied AI. To learn more about the practical implementation of the complex requirements of the EU AI Act, you can read the whitepaper below.

Read the whitepaper ‘Trustworthy AI’

Learn how you can implement the EU AI Act as a value driver

What is the impact of the EU AI Act on your business?

The EU AI Act will have a significant impact on businesses operating within the European Union. They will need to adapt their AI practices to comply with the new rules, which may affect operations, market access, transparency, and competitiveness. At the same time, the Act presents opportunities for innovation, consumer trust, and alignment with evolving global AI standards.

What are the next steps?

To meet the requirements of the EU AI Act, businesses must follow these steps:

  • Determine the classification of your AI system: Assess the risk level of your AI system according to the Act's classification framework. Identify whether it falls under the category of unacceptable risk, high risk, limited risk, or minimal risk.

  • Conduct a risk assessment: If your AI system is classified as high risk, perform a thorough risk assessment. Identify potential risks associated with your AI system, such as safety concerns, privacy issues, or biases.

  • Ensure compliance with technical requirements: Review the technical requirements specified in the EU AI Act for your AI system's risk category. This may include features such as robustness, accuracy, and cybersecurity measures.

  • Prepare documentation: Prepare detailed documentation about your AI system, including its capabilities, limitations, and potential biases (a minimal sketch of such a record follows this list).

  • Implement human oversight: If your AI system is classified as high risk, establish mechanisms for human oversight.

  • Conduct conformity assessments: For high-risk AI systems, engage in conformity assessments to demonstrate compliance with the EU AI Act.

  • Update internal processes and policies: Review and update your internal processes, policies, and procedures to align with the requirements of the EU AI Act.

  • Stay informed and adapt: Keep up to date with any updates or changes to the EU AI Act and related guidelines.
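
The documentation and human-oversight steps above can be supported by a simple per-system record. The sketch below is a minimal, hypothetical illustration of the kind of information to capture for a high-risk system (capabilities, limitations, known biases, and the oversight mechanism in place); the field names are our own and do not reproduce the Act's formal documentation requirements.

```python
from dataclasses import dataclass, field


@dataclass
class TechnicalDocumentation:
    """Hypothetical documentation record for a single high-risk AI system."""
    system_name: str
    intended_purpose: str
    capabilities: list[str] = field(default_factory=list)
    limitations: list[str] = field(default_factory=list)
    known_biases: list[str] = field(default_factory=list)
    human_oversight: str = ""  # e.g. "a recruiter reviews every automated rejection"

    def gaps(self) -> list[str]:
        """Return the sections that are still empty, as a simple completeness check."""
        missing = []
        if not self.capabilities:
            missing.append("capabilities")
        if not self.limitations:
            missing.append("limitations")
        if not self.known_biases:
            missing.append("known biases")
        if not self.human_oversight:
            missing.append("human oversight")
        return missing


doc = TechnicalDocumentation(
    system_name="cv-screening-assistant",
    intended_purpose="Ranking job applications for recruiters",
    capabilities=["Scores CVs against vacancy requirements"],
    human_oversight="A recruiter reviews every automated rejection",
)
print("Missing documentation sections:", doc.gaps())
```

Running the example prints the sections that still need to be filled in (limitations and known biases), which is the kind of gap analysis that typically precedes a conformity assessment.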

How can PwC help?

Our multidisciplinary team of experts can support you throughout the process of becoming compliant with the EU AI Act:

  • PwC provides assistance in navigating the requirements of the EU AI Act. We have the regulatory expertise to help interpret the regulation, determine the classification of your AI system, and identify the specific requirements that apply.

  • PwC assists in conducting risk assessments, developing strategies to mitigate risks, and ensuring technical compliance with the Act's requirements.

  • PwC supports you in conducting conformity assessments for high-risk AI systems and helps you prepare for the assessment process.

  • PwC helps review and update internal processes, policies, and procedures to align with the Act's requirements, including data governance and ethical considerations.

  • PwC provides training and education programmes to enhance your team's knowledge and capabilities in AI regulation and compliance.

  • PwC offers ongoing support to monitor AI system performance, address emerging risks, and adapt to any updates or changes in the EU AI Act.

Read more about PwC's Responsible AI toolkit for security and ethics

Contact us

Mona de Boer

Partner, Data & Artificial Intelligence, PwC Netherlands

Tel: +31 (0)61 088 18 59

Linda Thonen

Partner Legal, PwC Netherlands

Tel: +31 (0)6 397 728 65
