Developments in artificial intelligence (AI) are moving rapidly. Less than two years ago, much of the global population had never heard of the large language models behind ChatGPT and Bard. Today, these two AI tools – also known as generative AI because they generate text themselves – are widely used for both business and private applications.
‘Ask those large language models a few simple questions and entire narratives emerge, which is unparalleled,’ says Gerwin Naber, in charge of cybersecurity at PwC. ‘Another application is coding software. Developers can often code better, and certainly faster, when using it. At present, several things are coming together: there is enormous computing power and a huge amount of data available, and the technology behind the modelling is far more sophisticated than before.’
‘AI is not new. Previously, its application was reserved for specialists who spent a lot of time on data modelling in complex environments,’ says Mona de Boer, explaining the developments. She obtained her PhD earlier this year with the dissertation ‘Trustworthy AI and accountability: yes, but how?’. ‘That complexity no longer exists. You can now use an AI tool via your browser without any prior knowledge. That accessibility has increased tremendously in a short time. Just look at what you can do with ChatGPT and how chatbots have recently evolved for the better. Despite the risks, I am amazed at how well these models perform. That has certainly helped accelerate adoption by many millions of users in an incredibly short period.’
The two explain that large language models are, in essence, very good predictors of the next word. De Boer: ‘A computer is good at reproducing something, at acting as a mirror of what is in our heads. Applications like ChatGPT, as well as Google's Bard, have the entire Internet as their data source – but the Internet is simply what we humans have posted on it. Those applications are actually a very sophisticated mirror of who we are ourselves.’
The message from the two PwC experts is clear: keep thinking for yourself. ‘I don't want to trivialise it,’ Naber explains. ‘We need to think ahead and consider what impact AI will have. What can it best be used for? It is something that can help us, but it does not absolve us of the responsibility to keep using our own senses.’ De Boer adds: ‘I think it's a natural reaction to everything that's new in society. Now that the average citizen comes into contact with AI daily, there is more debate, including about the risks.’
In De Boer's view, almost every organisation – regardless of sector or size – processes data into information and acts on it. ‘That process takes quite a lot of time, and that time could be spent more meaningfully. Just look at how much time customer-service departments spend handling recurring customer queries, or the manufacturing industry spends performing standard quality checks on products, not to mention the challenges in healthcare, where both care itself and the administration around it demand attention. By deploying AI, such processes can be radically accelerated and performed better and more consistently. Don't think of AI as just a tool to be implemented; it is a technology that helps us look at our work in a fundamentally different way. Time is then freed up in organisations to act on those insights, and that is where the attraction lies. Consequently, I see jobs changing rapidly.’
Asked about the risks, Gerwin Naber says he does not have a monopoly on wisdom. ‘My initial reaction is that AI is becoming more accessible and therefore easier to use for creating phishing e-mails and fake websites. If, in the future, large language models also run on quantum computers, encryption keys will be broken in the blink of an eye. Have no illusions that prevention alone is the solution. Where are my “crown jewels” and how will I protect them? Simply expect to be hacked. So: monitor and detect, and if something happens, respond and recover. To make sure that the entire cycle works, organisations have to start simulating threats to test their own responsiveness.’
‘We are going to work more efficiently. I see AI becoming an assistant in many professions, a co-pilot,’ says De Boer. ‘Collaborating with that co-pilot is going to put a firm stamp on how we learn, how we work and how we spend our free time. We will also be seeing much more of the physical co-pilot. In healthcare, for example, physical robots are being developed that do a daily mental check-in with elderly people.’
Naber also sees AI supporting daily working practice. ‘With the deployment of AI, the computer has become so smart that it will help us process the disruptive flow of data signals – e-mails, minutes, images, social media and so on. My message to business is: immerse yourself in this development and explore what AI can do for you.’
For seven years, PwC has been bringing together everything relating to AI in what it calls the ‘Responsible AI toolkit’. ‘It is an international collection of knowledge on AI legislative developments, as well as views from regulators, consumer organisations, businesses and public authorities – actually from everyone who plays a role in the AI ecosystem,’ explains De Boer, PwC partner and AI specialist. ‘But there is more in the toolkit. It features up-to-date leading practices for specific organisational contexts, so that sectors can learn from one another. And it contains specific tools – for example, tools that help a data scientist make ethical trade-offs in day-to-day reality when developing an AI model. This is how we help our clients deploy AI responsibly.’
‘The meaning of responsible AI has been clearly articulated by the European Commission in the draft AI Act,’ De Boer continues. ‘To me, it means artificial intelligence that is compliant with the law, consistent in its performance and ethical. Ethical is the most difficult of the three. It relates to what is accepted by clients, patients and citizens within the boundaries of legislation – in other words, AI applications in which the risks of a negative impact on the welfare of natural persons are reduced to an acceptable level. A completely risk-free AI application does not exist. It is all about an accepted balance between the benefits an AI application brings to people and the risks it entails, and about being transparent about this as an organisation.’
This article previously appeared on nrc.nl