The experimental phase of artificial intelligence is over, but how do you now arrive at a strategic vision to accelerate responsibly with AI? PwC expert Mona de Boer takes you on that journey.
Imagine packing a suitcase and setting off on a journey. Destination or direction of travel: as yet unknown or, at best, fuzzy. Requirements during the journey or upon arrival: not much clearer. In a sense, the adventurous nature of an unorchestrated journey has its charm, and getting into motion is a great way to get inspired about places to go. But there comes a point where clarity on a destination becomes necessary to justify the effort of the journey and to ensure it is followed through, especially when the necessity of the journey itself is not up for discussion.
For most organisations, this best reflects their status quo when it comes to AI. They have started exploring, piloting, or even implementing early-stage AI solutions within their business context, generally without a defined strategy for doing so. Often, the early AI use cases that were explored, built and piloted focused on quick wins: applications of AI with an immediate, tangible business impact, achieved with limited investment, effort and risk. These quick wins have helped organisations explore recent AI developments and technology offerings, and their potential for daily professional practice. So far, the journey has generally gone well.
The next generation of AI use cases in a typical AI journey are the so-called ‘high-impact use cases’. These deliver a higher business impact than the quick wins, but require proportionately more effort and investment in, for example, AI technology and infrastructure, data quality and governance, and workforce literacy and skills. They also tend to expose the organisation to more, or higher, risks, both because of their nature (they tend to be less ‘administrative’ and more ‘transactional’ in focus) and because they are more likely to sit in core business domains than in supporting ones.
A successful AI business journey requires identifying these high-impact use cases in order to create clarity around the larger transformational value of AI to the organisation. However, this in itself does not constitute the ‘destination’ of the AI business journey. To define that destination, the organisation needs to develop a view on the business risks and efforts it is willing to accept in exchange for the medium- and long-term business value it has identified.
Organisations that invest in Responsible AI have a better customer experience.
So, how to go about forming such a strategic view to accelerate responsibly with AI? In addition to developing a clear view on the business value sought from AI, organisations can greatly benefit from the momentum that recently introduced AI regulation, such as the EU AI Act, has created. Although operationalising the detailed scope, obligations and requirements of the regulation is currently every bit as much a journey for organisations as AI adoption itself, the regulation provides a societal, cross-sector, business-focused view on AI risk appetite. Organisations can use that as a tangible point of reference to develop their own AI risk appetite, as well as the governance to support that appetite, the day-to-day processes to run the business in line with it, and the allocation of roles and responsibilities in the organisation to do so.
Organisations need to cultivate trust in AI. Employees need to trust that AI systems will do what they promise, while customers need to trust that AI is being deployed ethically and transparently. PwC research shows that organisations that invest in Responsible AI not only manage risk better, but also experience direct benefits, such as a better customer experience.
The EU AI Act and its forthcoming implementing acts, as well as the accompanying standards being developed by international standard setters, will provide organisations with guidance on organisational and technical measures to select, contextualise and implement in order to operationalise responsible AI in their day-to-day practice.
Leveraging the approaches and content provided by AI regulation, among other sources, is useful for ultimately achieving a comprehensive vision on how to accelerate responsibly with AI. However, a common language, goal alignment and (ongoing) collaboration between key organisational stakeholders are indispensable. Current developments in enterprise AI practice sometimes reveal that the organisational functions focused on creating business value and those focused on protecting it may have grown somewhat distant over time, and somewhat hyper-focused on ‘their’ side of organisational risk appetite.
A development as pivotal as modern AI is turning out to be requires that business value and risk alignment go hand in hand (again). This means that a successful ‘destination’ for an AI business journey can only be determined with the relevant organisational stakeholders at the same table from the start.