The newly introduced AI Act sets harmonised rules for (i) the development, (ii) the placing on the market and (iii) the use of AI systems in the European Union, following a risk-based approach. These rules will gradually come into effect over the next few years, with the first provisions applying from 2 February 2025. One of these first provisions is the legal requirement for AI literacy, set out in Article 4 of the AI Act.
Article 4 of the AI Act requires that 'providers and deployers of AI systems shall take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, education and training and the context the AI systems are to be used in, and considering the persons or groups of persons on whom the AI systems are to be used'. In fulfilling this requirement, ‘AI literacy’ means skills, knowledge and understanding that allow providers, deployers and affected persons—taking into account their respective rights and obligations in the context of the AI Act—to make an informed deployment of AI systems, as well as to gain awareness about the opportunities and risks of AI and possible harm it can cause (Article 3 sub 56).
In short, the AI literacy requirement under Article 4 may be summarised as follows:

| To accomplish... | ...the following AI literacy target audience... | ...should be equipped with the following necessary notions... | ...in order to... | ...taking into account... |
|---|---|---|---|---|
| the greatest benefits from AI systems while protecting fundamental rights, health and safety and to enable democratic control | staff and other persons dealing with the operation and use of AI systems on behalf of providers and deployers | a sufficient level of AI literacy (skills, knowledge and understanding) | make an informed deployment of AI systems and gain awareness of the opportunities and risks of AI and the possible harm it can cause | their technical knowledge, experience, education and training, the context the AI systems are to be used in, and the persons or groups of persons on whom the AI systems are to be used |
The provisions of Article 4 are applicable to the following actors in the AI value chain: providers of AI systems and deployers of AI systems.
With the recent uptake of enterprise (Generative) AI, it is safe to assume that the average organisation is more likely than not to fall under Article 4 of the AI Act and will need to comply with this general provision by 2 February 2025. This requirement, however, challenges providers and deployers of AI systems to shape and achieve AI literacy while being demonstrably accountable both for the effort put into that trajectory (Article 4, 'to their best extent') and for its outcome (Article 4, 'ensure a sufficient level of AI literacy').

Currently, the AI Act provides no further guidance on the nature and depth of the skills, knowledge and understanding to be achieved through AI literacy programmes, nor on what constitutes a 'sufficient level' of AI literacy. These matters are expected to remain 'in motion' for the time being. Nevertheless, organisations must design and implement their AI literacy programmes and, as applicable, comply with regulatory requirements and timelines. To support this process, the next part of this article shares initial insights into AI literacy, based on emerging developments in practice and in the theory around the topic.
The definition of 'AI literacy' in the AI Act (Article 3 sub 56) refers to the need to develop three layers of competency, aimed at the informed use of AI systems: (1) knowledge, (2) understanding and (3) skills.
All three layers of competency should be addressed as part of the learning objectives of organisational AI literacy programmes.
Next, the question is which topics are relevant substantive building blocks for achieving informed use of AI systems. Although the theory and practice of modern AI literacy are still in their infancy, a number of topics currently stand out. The table below illustrates these topics (non-exhaustive):
| # | AI literacy topics | Knowledge | Understanding | Skills |
|---|---|---|---|---|
| 1 | Conceptual understanding of AI | | | |
| a | | X | X | |
| 2 | Understanding of AI in business context | | | |
| a | Organisational relevance, opportunities, value drivers and use cases for (Generative) AI | X | X | |
| b | Organisational or function-specific (Generative) AI technology stack and user environment | X | X | |
| 3 | Understanding of AI risk and Trustworthy AI | | | |
| a | AI-regulation developments and the EU AI Act | X | X | |
| b | Types of organisational and societal risks associated with (Generative) AI | X | X | |
| c | AI risk management and mitigation strategies | X | X | |
| d | (Generative) AI risks at the application level | X | X | |
| e | Impact of AI use on affected persons (citizens, customers, patients, employees etc.) | X | X | |
| 4 | Practical understanding of (Trustworthy) AI | | | |
| a | Prompt engineering for Generative AI tools | X | X | X |
| b | Contextualising organisational use of Generative AI models: Retrieval Augmented Generation | X | X | X |
| c | Recognising (Generative) AI-generated output/content | X | X | X |
| d | Applying Trustworthy AI in real-life situations | X | X | X |
| e | Applying human oversight to (high-risk) AI systems, including the necessity of peer review (e.g. the four-eyes principle in human oversight) | X | X | X |
As previously mentioned, this overview is not exhaustive and not (yet) tailored to specific sub-target groups within the organisation (e.g. (non-)executives, business, data scientists and engineers, legal and compliance functions, risk functions, human resources, internal audit, etc.). Other topics that might be relevant to cover, depending on the specific circumstances and sub-target groups, include AI governance (establishing internal AI governance structures, roles and responsibilities), AI accountability (mechanisms to ensure accountability in AI development and use, reporting and auditing of AI systems) and human-centric AI design (designing AI systems with user needs in mind).
Early AI literacy practices show that some topics are appropriate to deliver across the organisation (e.g. conceptual understanding of AI), while others are most impactful when tailored to sub-audiences within the organisation. The latter applies to modules that are more sensitive to specific (Generative) AI applications and tools, or where different personas in the organisation may benefit in different ways from general-purpose AI applications and tools. These considerations are important in shaping organisational AI literacy programmes that meet the AI Act's requirement to take into account the context in which the AI systems will be used.
An effective AI literacy programme is essential to equip the workforce to execute the (future) AI strategy, to use AI responsibly, and to shape, implement and comply with new legal requirements. Such a workforce is both a prerequisite for and an accelerator of realising true transformational value from AI.
It is also realistic to expect that AI literacy will be a continuous learning journey for years to come. This means that an AI literacy plan and programme, developed and delivered in the short term, must also include a vision for continuous learning (keeping abreast of advancements in AI, forthcoming regulatory implementing acts and societal 'algoprudence') and proactively facilitate that process with resources.
Mona de Boer
Partner, Data & Artificial Intelligence, PwC Netherlands
Tel: +31 (0)61 088 18 59