Three no-regret moves to explore AI business potential and regulatory impact at the same time

The EU Artificial Intelligence Act is here: now what?

  • Blog
  • 17 May 2024
Mona de Boer

Partner, Data & Artificial Intelligence, PwC Netherlands

With the passage of the EU Artificial Intelligence Act, exactly three years after its first draft, organisations now face the challenge of understanding the business impact of this new regulation and determining which measures to take. Adding to this dynamic, for most organisations thinking about the risk and compliance implications of AI coincides with exploring its business potential. PwC expert Mona de Boer shares three no-regret moves to address both.

Exactly three years after the first draft, the EU AI Act is now a reality. The new European law aims to ensure responsible and ethical use of artificial intelligence while encouraging innovation and competition. Its introduction raises questions within organisations. For most companies, thinking about the risks and regulatory implications of AI coincides with exploring its business potential. In daily practice, this often leads to the question of which of the two (risk and compliance, or exploring opportunities) should be clarified first to provide direction for the other. In reality, they are two sides of the same coin. As an organisation, you can therefore benefit from some immediate no-regret actions while exploring both the possibilities and the risks of AI for your business operations.

No-regret move #1: Map your landscape of current and expected AI applications

  1. Top-down: define current and foreseeable business opportunities and issues and compare these with the potential that (generative) AI technology offers. The outcome: your top-down defined AI use cases.
  2. Bottom-up: hold a brainstorming session with proper representation from the relevant business functions to identify potential AI use cases. The success factor in brainstorming is not overthinking it. The outcome: your bottom-up defined AI use cases.
  3. Combine both categories of AI use cases and plot these against two dimensions: 
    • overall business impact and
    • implementation effort required. 
  4. Highlight your ‘quick wins’ (high business impact, low implementation effort) and ‘high potentials’ (high business impact, high implementation effort). The outcome: your strategic landscape of AI applications.
  5. Create an inventory of your current AI applications, in use and in development, and add them to the strategic landscape of AI applications. Don’t forget third-party applications.
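Steps 3 and 4 above amount to a simple two-dimensional classification. A minimal sketch, assuming each use case is scored high or low on both dimensions (the use cases named below are hypothetical examples, not PwC guidance):

```python
# Hypothetical sketch: placing AI use cases in the impact/effort matrix
# described in steps 3-4. Scores and use cases are illustrative assumptions.

def classify_use_case(business_impact: str, implementation_effort: str) -> str:
    """Place a use case in the business-impact / implementation-effort matrix."""
    if business_impact == "high" and implementation_effort == "low":
        return "quick win"
    if business_impact == "high" and implementation_effort == "high":
        return "high potential"
    return "deprioritise for now"

# Illustrative use cases with (business impact, implementation effort) scores.
use_cases = {
    "draft customer-service replies": ("high", "low"),
    "automated contract review": ("high", "high"),
    "meeting-note summaries": ("low", "low"),
}

for name, (impact, effort) in use_cases.items():
    print(f"{name}: {classify_use_case(impact, effort)}")
```

In practice the scoring itself is the hard part and requires input from the business functions involved; the classification step is deliberately simple.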

The inventory should at least capture:

  • the purpose and intended use of each AI system
  • the data it uses
  • its core functionality / workings
  • the processes, functions and (in)direct stakeholders it affects.
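The inventory fields above could be captured in a simple structured record. A minimal sketch, assuming an in-memory registry; the field names mirror the bullet list but are illustrative, not a prescribed schema:

```python
# Minimal sketch of an AI-application inventory record. Field names follow
# the bullet list above and are assumptions for illustration only.
from dataclasses import dataclass

@dataclass
class AIApplication:
    name: str
    purpose: str                      # purpose and intended use of the system
    data_used: list[str]              # the data the system uses
    core_functionality: str           # the system's core workings
    affected_stakeholders: list[str]  # processes, functions, (in)direct stakeholders
    third_party: bool = False         # don't forget third-party applications
    status: str = "in use"            # "in use" or "in development"

# Illustrative entry in the inventory.
inventory = [
    AIApplication(
        name="Invoice classifier",
        purpose="Route incoming invoices to the right approver",
        data_used=["invoice PDFs", "vendor master data"],
        core_functionality="Supervised text classification",
        affected_stakeholders=["finance team", "vendors"],
        third_party=True,
    ),
]

print(f"{len(inventory)} application(s) registered")
```

Even a lightweight record like this gives the regulatory impact analysis something concrete to work from, since the AI Act's risk classification depends on exactly these attributes (purpose, data, functionality, affected parties).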

Result: a robust starting point for an AI strategy and a regulatory impact analysis.

No-regret move #2: Raise awareness and upskill employees

For every job, function or role, the question is not if AI will change it, but when. Not yet having an AI strategy is no reason to postpone offering employees upskilling opportunities or creating a safe learning environment in which they can build skills in using AI and dealing with its risks. The latter is especially important because employees may start working with (generative) AI on their own initiative. Agility is the key word here. Applying the latest generation of AI technology is like learning to work with a new (wired) colleague: you have to spend time together to get attuned to each other. Preferably before the collaboration hits daily business reality.

What the upskilling should focus on for now:

  1. Introduction to (generative) AI and its principles: This topic provides an overview of (generative) AI and explains its fundamental principles and applications. Employees will learn about the potential benefits and challenges associated with using (generative) AI.
  2. Responsible use of (generative) AI: This topic highlights the importance of responsible AI use. Employees learn about ethical considerations, including bias, fairness, privacy, and transparency, in the context of AI applications. They will gain an understanding of the need to ensure that AI systems are developed and deployed in a responsible and accountable manner, in accordance with new legal requirements under the AI Act.
  3. Prompt engineering: This topic focuses on the concept of prompt engineering, which involves designing effective prompts or instructions to direct the behaviour of a Generative AI model. Employees will learn how to craft prompts that produce desired outputs while avoiding unintended biases or undesirable outcomes. They will gain an understanding of the significance of prompt engineering for achieving reliable and ethical AI results.

By covering these three key topics, organisations can provide employees with a comprehensive understanding of (generative) AI, responsible AI use, and the importance of prompt engineering for effective and ethical AI application.
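The prompt engineering topic can be made concrete with a small example. A minimal sketch of composing a structured prompt that constrains a model's behaviour; the template and field names are assumptions for illustration, and no specific model or API is implied:

```python
# Illustrative sketch of prompt engineering: a structured prompt that states
# the task, explicit constraints, and the expected output format, so the
# model's behaviour is directed rather than left open-ended.

def build_prompt(task: str, constraints: list[str], output_format: str) -> str:
    """Compose a prompt from a task, explicit constraints, and an output format."""
    lines = [f"Task: {task}", "Constraints:"]
    lines += [f"- {c}" for c in constraints]
    lines.append(f"Respond in this format: {output_format}")
    return "\n".join(lines)

prompt = build_prompt(
    task="Summarise the attached customer complaint",
    constraints=[
        "Do not include personal data in the summary",
        "Flag any ambiguity instead of guessing",
    ],
    output_format="three bullet points",
)
print(prompt)
```

Constraints such as "do not include personal data" show how responsible-use guidelines (see move #3) translate directly into prompt design.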

Result: an equipped workforce to execute the (future) AI strategy, to handle AI responsibly, and to shape, implement and comply with legal requirements.

No-regret move #3: Implement responsible use guidelines

Responsible use of AI revolves around desired business conduct. Firstly, it requires awareness and clarity about what that is and secondly, the ability to recognise the associated risks in practice and to respond effectively to them. Organisations should establish simple but clear and workable responsible use guidelines. These guidelines address what should always be done and/or what should never happen (i.e. the ‘non-negotiables’) when it comes to use of AI and data.

To determine the working principles for daily use, organisations can draw inspiration from the ethical AI principles, such as transparency, accountability, human oversight, social and ecological well-being, as formulated in 2019 by the High-Level Expert Group of the European Commission. These principles provide broad guidance and usually need to be further operationalised to be workable in daily practice.

When developing these guidelines for responsible use, it is important to find an appropriate balance between setting boundaries and offering freedom for innovation within the organisation. After all: innovation brings risk, and without taking risk there is no innovation.

Result: clear criteria to guide the AI strategy and its execution, end-to-end through the organisational AI lifecycle.

Take action as you further explore AI

As the opportunities and risks of AI evolve at an unprecedented pace, the motto 'progress over perfection' is as relevant as ever. The question for many organisations is not if they will be affected by AI, but when. These three no-regret moves help organisations get moving as they navigate both the opportunities and the regulatory demands.

Read the whitepaper ‘Trustworthy AI’

Learn how you can implement the EU AI Act as a value driver
