By Selwyn Moons
The world is digitizing at a rapid pace. This presents huge opportunities but also new challenges. About five hundred hours of video content are uploaded to YouTube every minute, which amounts to roughly 30,000 hours of newly uploaded content per hour. This illustrates the immense magnitude of the online domain. Although some platforms curate their own content, we need to realise that part of the content online is plainly harmful or undesirable for other reasons. Think of racist content or content that incites hatred and violence, but also content that misleads consumers or promotes an unhealthy lifestyle.
For instance, I want my children to eat healthy food, so what if they are watching hamburger-eating contests online in which the brand name of a hamburger chain is shown continuously? Although videos of hamburger-eating contests are legal, I still want to protect my children from clandestine advertising, because healthy food choices are important to me. Fortunately, national regulatory bodies in the field of audio-visual services have a crucial role to play when it comes to protecting children and other consumers.
For decades, supervisors in the public domain were accustomed to overseeing and intervening after the fact. This was broadly accepted because of the relatively slow pace of societal change. In a rapidly digitizing world, however, a supervisor's actions need to be far more accurate in order to stay relevant to society. To stay with the example of audio-visual services, regulatory bodies across Europe now face the revised EU Audiovisual Media Services Directive (AVMSD). Media regulatory bodies will be responsible for monitoring the gigantic online domain in order to protect children and consumers and to combat racial and religious hatred. This means an enormous task awaits the regulatory bodies involved. Traditionally, staff perform this monitoring role by viewing visual material and by collecting and forwarding data manually, often in response to a tip from an authority, a competitor or a citizen.
Yet this approach is no longer feasible, as most regulatory bodies have limited manpower and financial resources relative to the enormous amount of online visual material they (will) have to monitor. To build more trust in society, it is crucial to strengthen regulators by using technology in a smart way. Together with my team, I entered into a dialogue with media regulators from different European countries in order to learn about their priorities and to understand their challenges and risks.
In this context, we looked at technical solutions to help them perform their tasks and increase their efficiency and impact. We developed a smart filter: an AI-based digital tool that runs a simultaneous check on moving images, still images and sound, including spoken language. Combining these three components makes it possible to detect potentially harmful or otherwise unwanted content.
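To make this concrete, here is a minimal Python sketch of how such a three-component check could be fused into a single harm score. The scoring functions are hypothetical placeholders rather than our actual tool: in practice each would wrap a trained model (an image classifier for key frames, a video model for moving images, and speech recognition plus text classification for audio).

```python
from dataclasses import dataclass

@dataclass
class ModalityScores:
    still_image: float   # harm score for extracted key frames, 0..1
    moving_image: float  # harm score for video segments, 0..1
    audio: float         # harm score for sound and spoken language, 0..1

# Placeholder scorers: stand-ins for trained models, not real implementations.
def score_still_images(item: str) -> float:
    return 0.0

def score_moving_images(item: str) -> float:
    return 0.0

def score_audio(item: str) -> float:
    return 0.0

def combined_harm_score(item: str) -> float:
    """Fuse the three modality scores. A simple maximum is used here,
    so harmful content in any single modality is enough to flag an item."""
    scores = ModalityScores(
        still_image=score_still_images(item),
        moving_image=score_moving_images(item),
        audio=score_audio(item),
    )
    return max(scores.still_image, scores.moving_image, scores.audio)

if __name__ == "__main__":
    print(combined_harm_score("https://example.com/some-video"))
```

Taking the maximum rather than an average is one possible design choice: it keeps the filter sensitive to content that looks harmless in two modalities but is clearly harmful in the third, such as an innocuous image track accompanied by hateful speech.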
(Image caption: example of AI detection of unhealthy food consumption by children)
The use of AI has recently received negative publicity because of the risk that AI produces undesired biases if the algorithm is not corrected in time (and manually). Responsible design and use of AI should be high on the agenda. At the same time, however, we should further explore its potential contribution to society. Through our contacts with the police, we were introduced to EOKM, the Dutch agency that fights online child abuse. For EOKM we are now using AI technology to build a filter that simultaneously checks sound and images, both moving and still, in order to trace online child abuse. The tool classifies the material it finds into five categories with different levels of harmfulness. This makes it easier for EOKM staff to go through the enormous amount of online material in a more targeted manner, focusing on material with a high risk of containing inadmissible content.
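As an illustration of this triage step, the sketch below maps a harm score to one of five categories and sorts a work queue so that staff see the highest-risk items first. The category labels and score boundaries are illustrative assumptions on my part, not EOKM's actual scheme.

```python
# Illustrative five-level triage; thresholds and labels are assumptions.
CATEGORIES = [
    (0.9, "category 5: very high risk"),
    (0.7, "category 4: high risk"),
    (0.5, "category 3: medium risk"),
    (0.3, "category 2: low risk"),
    (0.0, "category 1: likely harmless"),
]

def categorize(harm_score: float) -> str:
    """Return the first category whose threshold the score reaches."""
    for threshold, label in CATEGORIES:
        if harm_score >= threshold:
            return label
    return CATEGORIES[-1][1]

def triage(items: list[tuple[str, float]]) -> list[tuple[str, str]]:
    """Sort material by harm score so analysts review the riskiest items first."""
    ranked = sorted(items, key=lambda pair: pair[1], reverse=True)
    return [(item, categorize(score)) for item, score in ranked]

print(triage([("clip_a", 0.95), ("clip_b", 0.12), ("clip_c", 0.64)]))
```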
In theory, it would be possible to trust the model and omit the manual check. But because there is also a lot of harmless material, and wrongly labeling material as child pornography can have serious consequences, a fully automated model is not yet an option. For now, our AI tool supports productivity and increases the output of material that can be taken offline.
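A minimal sketch of that human-in-the-loop design, under the same illustrative assumptions as above: the model score only determines how urgently an item is reviewed, and nothing is forwarded for takedown without a human verdict.

```python
from typing import Optional

PRIORITY_THRESHOLD = 0.9  # hypothetical cut-off for priority review

def takedown_decision(model_score: float, human_verdict: Optional[bool]) -> str:
    """The model never decides on its own; it only sets review urgency."""
    if human_verdict is None:
        urgency = "priority" if model_score >= PRIORITY_THRESHOLD else "routine"
        return f"awaiting {urgency} human review"
    return "forward for takedown" if human_verdict else "dismiss as false positive"

print(takedown_decision(0.97, None))   # awaiting priority human review
print(takedown_decision(0.97, True))   # forward for takedown
print(takedown_decision(0.42, False))  # dismiss as false positive
```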
The beauty of AI technology is that it gives regulators a weapon to fight unwanted online content faster and more efficiently, despite limited resources. Scraping the entire web in search of all unwanted content is far too expensive, as it requires massive computing capacity. Yet focusing on well-defined areas is feasible using specific AI-based software. Although a fully automated model with fewer manual checks is technically possible, most legal frameworks do not yet allow this option. The bottom line is that, with the help of AI technology, regulators can create more trust by bringing about desired effects in society, while using their scarce human resources more efficiently.