New regulatory requirement effective 2 February 2025

AI literacy under the EU AI Act

Publication: 08 Jan 2025

On 1 August 2024, the European Artificial Intelligence Act (AI Act) entered into force. The AI Act aims to ensure that AI developed and used in the European Union's internal market is trustworthy (i.e. lawful, ethical and robust), with safeguards to protect people's health, safety and fundamental rights. This article sheds light on one of the AI Act's first requirements that providers and deployers of AI systems in the EU market must meet: AI literacy.

By: Mona de Boer (Digital Trust, Data & Artificial Intelligence)

The EU AI Act introduces a regulatory requirement for AI literacy

The newly introduced AI Act sets harmonised rules for (i) the development, (ii) the placing on the market and (iii) the use of AI systems in the European Union, following a risk-based approach. These rules will gradually come into effect over the next few years, with the first provisions applying from 2 February 2025. One of these first provisions is a legal requirement for AI literacy, set out in Article 4 of the AI Act.

Organisations that provide or use AI systems must shape and achieve AI literacy

Article 4 of the AI Act requires that 'providers and deployers of AI systems shall take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, education and training and the context the AI systems are to be used in, and considering the persons or groups of persons on whom the AI systems are to be used'. In fulfilling this requirement, ‘AI literacy’ means skills, knowledge and understanding that allow providers, deployers and affected persons—taking into account their respective rights and obligations in the context of the AI Act—to make an informed deployment of AI systems, as well as to gain awareness about the opportunities and risks of AI and possible harm it can cause (Article 3 sub 56).

In short, the AI literacy requirement under Article 4 may be summarised as follows: the aim is to achieve the greatest benefits from AI systems while protecting fundamental rights, health and safety and enabling democratic control. To that end, the AI literacy target audience (providers and deployers) should be equipped with the necessary notions (skills, knowledge and understanding) in order to gain awareness about the opportunities and risks of AI and the possible harm it can cause, and to make an informed deployment of AI systems, taking into account:

  • the AI literacy target audience's technical knowledge, experience, education and training;
  • the context the AI systems are to be used in;
  • the persons or groups of persons on whom the AI systems are to be used.

The provisions of Article 4 are applicable to the following actors in the AI value chain:

  • providers of AI systems, i.e. natural or legal persons, public authorities, agencies or other bodies that develop an AI system or a general-purpose AI model or that have an AI system or a general-purpose AI model developed and place it on the market or put the AI system into service under their own name or trademark, whether for payment or free of charge (Article 3, sub 3);
  • deployers of AI systems, i.e. natural or legal persons, public authorities, agencies or other bodies using an AI system under their authority except where the AI system is used in the course of a personal non-professional activity (Article 3, sub 4).

With the recent uptake of enterprise (Generative) AI, the average organisation is more likely than not to fall under the provisions of Article 4 of the AI Act and will need to comply with this general provision from 2 February 2025. This requirement, however, challenges providers and deployers of AI systems to shape and achieve AI literacy while being demonstrably accountable for both the effort put into that trajectory (Article 4: 'to their best extent') and its outcome (Article 4: 'ensure a sufficient level of AI literacy'). Currently, the AI Act provides no further guidance on the nature and depth of the skills, knowledge and understanding to be achieved through AI literacy programmes, nor on what constitutes a 'sufficient level' of AI literacy. The expectation is that these matters are and will remain 'in motion' for the time being. Nevertheless, organisations must design and implement their AI literacy programmes and, as applicable, comply with regulatory requirements and timelines. To support this process, the next part of this article shares initial insights into AI literacy, based on emerging developments in practice and in the theory around the topic.

Topics and competency levels to consider when shaping organisational AI literacy programmes

The definition of 'AI literacy' in the AI Act (Article 3 sub 56) refers to the need to develop three layers of competency, aimed at the informed use of AI systems: (1) knowledge, (2) understanding and (3) skills. In short, this means:

  1. learning factual information about AI concepts and practices through study or experience (knowledge);
  2. grasping the ‘why’ behind that factual information to be able to apply it to previously unseen and non-standard situations and contexts (understanding); and
  3. repeatedly applying that factual information in everyday practice (skills).

All three layers of competency should be addressed as part of the learning objectives of organisational AI literacy programmes.

Next, the question is which topics are relevant substantive building blocks for achieving informed use of AI systems. Although the theory and practice of modern AI literacy are still in their infancy, a number of topics currently stand out. The overview below illustrates these topics (non-exhaustive):

AI literacy topics (competency layers addressed: K = knowledge, U = understanding, S = skills)

1. Conceptual understanding of AI (K, U):
   • Introduction to AI algorithms and models
   • Definition of (Generative) AI
   • Fundamental (Generative) AI principles, concepts and techniques
   • Basics of AI development, deployment and maintenance
   • Real-world business applications of (Generative) AI

2. Understanding of AI in business context
   a. Organisational relevance, opportunities, value drivers and use cases for (Generative) AI (K, U)
   b. Organisational or function-specific (Generative) AI technology stack and user environment (K, U)

3. Understanding of AI risk and Trustworthy AI
   a. AI-regulation developments and the EU AI Act (K, U):
      • Objectives and scope of the AI Act
      • Key provisions and timelines
      • Rights and obligations of providers, deployers and affected persons
      • Compliance requirements and consequences of non-compliance
   b. Types of organisational and societal risks associated with (Generative) AI (K, U):
      • Lawfulness (e.g. breach of data privacy, infringement of intellectual property, lack of explainability)
      • Ethics (e.g. unfairness, systemic bias, misinformation and abuse)
      • (Technical) robustness (e.g. invalid data, AI model inaccuracy, cybersecurity)
      • Environmental/sustainability (e.g. energy and water consumption of data centres)
   c. AI risk management and mitigation strategies (K, U):
      • Identifying (Generative) AI risks
      • Applying risk assessment methodologies to identified (Generative) AI risks
      • Applying organisational risk appetite to assessed risks
      • Designing and implementing measures and techniques to mitigate (Generative) AI risks in accordance with organisational risk appetite
   d. (Generative) AI risks at the application level (K, U):
      • E.g. AI hallucinations, unauthorised access to systems and data, data privacy risks, information security risks, human over- or underreliance on AI systems
   e. Impact of AI use on affected persons (citizens, customers, patients, employees, etc.) (K, U):
      • Understanding how decisions taken with the assistance of (Generative) AI have an impact on (different groups of) affected persons
      • Applying strategies to minimise harm and maximise benefits in accordance with external and internal requirements and policies

4. Practical understanding of (Trustworthy) AI
   a. Prompt engineering for Generative AI tools (K, U, S; see the first sketch after this overview):
      • Designing effective prompts or instructions to direct the behaviour of a Generative AI model
      • Crafting prompts that produce desired outputs while avoiding unintended biases or undesirable outcomes
   b. Contextualising organisational use of Generative AI models: Retrieval-Augmented Generation (K, U, S)
   c. Recognising (Generative) AI-generated output/content (K, U, S)
   d. Applying Trustworthy AI in real-life situations (K, U, S), e.g.:
      • Case studies of ethical dilemmas in AI
      • Making trade-offs between Trustworthy AI design principles such as data privacy, AI model accuracy and AI system transparency
      • Performing algorithmic impact assessments and fundamental rights impact assessments (FRIAs)
   e. Applying human oversight to (high-risk) AI systems (K, U, S; see the second sketch after this overview):
      • Understanding the rationale and intended purpose of an AI system
      • Comprehending the strengths and limitations of an AI system
      • Understanding forms of human oversight (human-in-the-loop, human-on-the-loop, human-in-control) and their suitability in relation to specific AI system applications, as well as the necessity of peer review (e.g. the four-eyes principle in human oversight)
      • Making effective use of an AI system's transparency information in order to appropriately interpret, critically evaluate and effectively use the system's output
      • Understanding how to address uncertainties or misalignment after evaluating AI system output (including not using or otherwise disregarding, overriding or reversing the output of the AI system, or intervening in/interrupting system operation to allow it to come to a halt in a safe state)
      • Monitoring the operation of an AI system on the basis of its instructions of use
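To make the prompt engineering topic (4a) tangible, the short Python sketch below illustrates the idea under stated assumptions: the call_llm function is a hypothetical stand-in for whatever Generative AI interface an organisation actually uses, and the template wording is invented for the example. The point it demonstrates is that role, grounding, a refusal rule and output constraints are designed into the prompt itself.

```python
# Minimal prompt engineering sketch (topic 4a in the overview above).
# `call_llm` is a hypothetical placeholder for any chat-completion API;
# in practice it would be replaced by the organisation's own AI tooling.

def call_llm(prompt: str) -> str:
    """Hypothetical model call; returns a canned string so the sketch runs."""
    return f"[model response to a {len(prompt)}-character prompt]"

# The 'engineering' lives in the template: it fixes the model's role,
# grounds the answer in supplied context, adds a refusal rule for missing
# information, and caps the answer length.
PROMPT_TEMPLATE = """You are an internal policy assistant.
Answer only on the basis of the policy excerpt below.
If the excerpt does not contain the answer, say that you do not know.

Policy excerpt:
{context}

Question: {question}

Answer in at most three sentences."""

def answer(question: str, context: str) -> str:
    prompt = PROMPT_TEMPLATE.format(context=context, question=question)
    return call_llm(prompt)

print(answer(
    question="How many leave days do new hires receive?",
    context="New hires accrue 25 leave days per full calendar year.",
))
```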
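For the human oversight topic (4e), the second sketch illustrates one form of oversight, human-in-the-loop, again under invented assumptions: the ModelOutput structure and its confidence score are hypothetical, and the threshold stands in for an organisational risk appetite. The mechanism it shows is that low-confidence outputs never take effect without human intervention.

```python
# Minimal human-in-the-loop sketch (topic 4e in the overview above).
# ModelOutput and its confidence score are invented for the example;
# real AI systems expose transparency information in many different forms.
from dataclasses import dataclass


@dataclass
class ModelOutput:
    decision: str
    confidence: float  # 0.0-1.0, assumed to be reported by the AI system


CONFIDENCE_THRESHOLD = 0.85  # illustrative stand-in for risk appetite


def human_review(output: ModelOutput) -> str:
    """Placeholder for routing a case to a qualified human reviewer, who
    may accept, override or reverse the AI system's proposed output."""
    return f"escalated to human decision-maker (was: {output.decision})"


def decide(output: ModelOutput) -> str:
    # Human-in-the-loop: outputs below the threshold never take effect
    # without human intervention.
    if output.confidence >= CONFIDENCE_THRESHOLD:
        return f"accepted: {output.decision}"
    return human_review(output)


print(decide(ModelOutput(decision="approve claim", confidence=0.93)))
print(decide(ModelOutput(decision="reject claim", confidence=0.41)))
```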

As previously mentioned, this overview is not exhaustive and not (yet) tailored to specific sub-target groups within the organisation (e.g. (non-)executives, business, data scientists and engineers, legal and compliance functions, risk functions, human resources, internal audit, etc.). Other topics that might be relevant to cover, depending on the specific circumstances and sub-target groups, include AI governance (establishing internal AI governance structures, roles and responsibilities in AI governance), AI accountability (mechanisms to ensure accountability in AI development and use, reporting and auditing AI systems) and human-centric AI design (designing AI systems with user needs in mind).

Early AI literacy practices show that some topics are appropriate to deliver across the entire organisation (e.g. conceptual understanding of AI), while others are most impactful when tailored to sub-audiences within the organisation. The latter applies to modules that are more sensitive to specific (Generative) AI applications and tools, or where different personas in the organisation may benefit differently from general-purpose AI applications and tools. These considerations are important in shaping organisational AI literacy programmes that meet the AI Act's requirement to take into account the context in which the AI systems will be used.

AI literacy is a journey, not a destination

An effective AI literacy programme is essential to equip the workforce to execute the (future) AI strategy, to use AI responsibly, and to shape, implement and comply with new legal requirements. Such a workforce is both a prerequisite for and an accelerator of realising true transformational value from AI.

It is also realistic to expect that AI literacy will be a continuous learning journey for years to come. This means that an AI literacy plan and programme, developed and delivered in the short term, must also include a vision for continuous learning (keeping abreast of advancements in AI, forthcoming regulatory implementing acts, and societal 'algoprudence') and must proactively facilitate that process with resources.

Contact us

Mona de Boer

Partner, Data & Artificial Intelligence, PwC Netherlands

Tel: +31 (0)61 088 18 59
