AI Act manages risks, but mind the pitfalls

Major step towards responsible AI

  • Blog
  • 09 Jan 2024

According to Mona de Boer, the agreement within the European Parliament on rules to manage risks around artificial intelligence (AI) is an important step towards the responsible use of AI. However, the PwC expert still sees some pitfalls.

Update on the progress of the EU AI Act


Recently, EU countries and the European Parliament reached a provisional agreement on rules for artificial intelligence (AI). Mona de Boer, partner at PwC, explains what this means.


We are one step closer to the adoption of the world's first AI regulation, the EU Artificial Intelligence Act (AI Act). The European Parliament has formally agreed on new legal rules to manage AI risks and to promote the use of AI in line with EU values, including human oversight, privacy and non-discrimination.

Built on the foundations of consumer protection and product integrity, the AI Act classifies AI applications by risk level. Organizations with high-risk AI systems must meet a series of digital-security requirements for these applications and must be able to demonstrate compliance on an ongoing basis.
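To make the risk-based structure more concrete, here is a minimal, non-authoritative Python sketch of how an organization might model the Act's risk tiers and the requirement domains for high-risk systems in an internal AI inventory. The names RiskTier, HIGH_RISK_REQUIREMENTS and obligations are illustrative inventions, and the tier and requirement labels are paraphrases of the regulation (the high-risk domains roughly follow Articles 9-15), not legal text.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative AI Act risk tiers (paraphrased labels, not legal text)."""
    UNACCEPTABLE = "prohibited practices"
    HIGH = "high-risk systems"
    LIMITED = "limited risk (transparency duties)"
    MINIMAL = "minimal risk"

# The seven requirement domains for high-risk systems,
# paraphrased for illustration (roughly Articles 9-15).
HIGH_RISK_REQUIREMENTS = [
    "risk management system",
    "data and data governance",
    "technical documentation",
    "record-keeping (logging)",
    "transparency and information to users",
    "human oversight",
    "accuracy, robustness and cybersecurity",
]

def obligations(tier: RiskTier) -> list[str]:
    """Toy triage: map a risk tier to a simplified view of its obligations."""
    if tier is RiskTier.UNACCEPTABLE:
        return ["do not deploy: the practice is banned"]
    if tier is RiskTier.HIGH:
        return HIGH_RISK_REQUIREMENTS
    if tier is RiskTier.LIMITED:
        return ["disclose that users are interacting with an AI system"]
    return []  # minimal risk: voluntary codes of conduct only

if __name__ == "__main__":
    for requirement in obligations(RiskTier.HIGH):
        print("-", requirement)
```

In practice such an inventory would be maintained per AI system and kept up to date as classifications and legal guidance evolve; the sketch only illustrates the tiered logic of the Act.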

AI Act aims for innovation without accidents

For a long time, the development of technological applications at scale ran mainly in 'move fast and break things' mode. With the upcoming rules and regulations, one of the largest economies in the world is signalling that it will no longer accept accidents, albeit without standing in the way of organizations' capacity to innovate.

How we will notice the impact of these impending regulations in daily life is a question that will remain unanswered for a while. Although the EU's signal to the market is loud and clear, the practical consequences of the AI Act for businesses and governments are not.

Most important pitfalls in the implementation of the AI Act

My two cents on the most important pitfalls in implementing the AI Act:

  • Organizations should avoid managing only the risks to their business operations rather than the risks arising from them. The former views risk from within the organization, the latter from the outside, and the two do not necessarily lead to the same risk assessment. The outside-in view is how the AI Act understands 'digital security'.
  • Organizations should not focus only on managing the risks they already know. Undesirable effects of AI on society are generally the result of risks that were not foreseen in advance. Treating the seven domains of 'digital security requirements' in the AI Act as a checklist could easily produce administrative compliance and, with it, a false sense of security.
  • 'Hard' measures to ensure digital security, such as technical checks and balances in the development process of AI applications, are only effective if they fall on fertile 'soft' ground in the organizational culture. Strategic priorities for digital security cannot coexist with exclusively commercial and financial KPIs on the work floor: anyone who says 'strategic A' must also say 'operational B'.

Despite the many unanswered questions around the implementation of the AI Act, organizations must act in the short term: barring further delays, the AI Act is expected to come into effect in 2024.

Building a smarter and more responsible AI world

Contact us

Mona de Boer


Partner, Data & Artificial Intelligence, PwC Netherlands

Tel: +31 (0)61 088 18 59
