A short excursion to the limits of perception and reality

With a deep-seated desire to believe in transformation, our human species is virtually predestined for the uncritical use – or uncritical rejection – of AI. With his example, our Chief Innovation Officer Rob van Kranenburg shows how susceptible we are to even the simplest illusionist tricks. Nothing against teachers turning into Father Christmases for the delight of children! But beware of AIs that morph all too readily into application-friendly Mechanical Turks for the benefit of their providers!

In a video from the nineties, we see a teacher in a preschool class slowly transforming into Sinterklaas – Saint Nicholas. First she puts on a white blouse, then purple trousers over her jeans, then a cape, and finally a mitre on her head. The children watch and enjoy the theatre. Only when she puts on the beard do they jump up and shout: Saint Nicholas, Saint Nicholas! The teacher has disappeared as if by magic, and none of the children wonder where she is. Saint Nicholas is here!

A situation such as a disguise, a purchase or a self-service financial transaction can remain stable and normal for a very long time – that is, until an element repeatedly appears that takes reality to a new level. This is exactly what happens with paradigm shifts. Precisely because we humans are wired to keep adding confirming evidence to whatever situation we know to be true, we want serious change or transformation to be accompanied by magic. When something changes radically, when things suddenly behave differently and we ask what happened, we need a soothing element.

This process is currently playing out in the field of AI. That is why I would like to explain the Father Christmas element of AI today. Basically, with the recent quantum leap in the functionality of AI, nothing has happened – but something has emerged, a new element. As we all know, data is the brainpower of AI algorithms. Deep learning and machine learning, supported by millions of cheap data workers who moderate content and label training data, train models on these datasets to recognise all kinds of patterns, including human ones. To this end, the AI machinery breaks everything down into a set of properties (a small sketch of what that means follows the list below):

A car is a set of properties.

A park is a set of properties.

A room is a set of properties.

A bicycle is a set of properties.

A person is a set of properties.

A traffic accident is a set of properties.

A cat is a set of properties.
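What does that look like in practice? Here is a minimal sketch in Python – purely a toy illustration, not any real AI pipeline; every name and value in it is invented – of a world flattened into sets of properties:

```python
# Toy illustration only (no real AI pipeline): the world reduced to
# flat collections of properties. All names and values are invented.

car    = {"type": "car",    "colour": "green", "wheels": 4, "occupants": 3}
person = {"type": "person", "alive": True,     "age": 41,   "role": "driver"}
cat    = {"type": "cat",    "alive": True,     "colour": "black"}

# To this machinery, each thing simply *is* the set of its properties:
for thing in (car, person, cat):
    print(thing["type"], "->", set(thing.keys()))
```

To such a representation, a car, a person and a cat differ only in which keys and values they carry – nothing in it says that one of them is an object in the world with which one can drive from A to B.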

For us humans, it is quite clear what a car is: a certain object in the world with which one can drive from A to B. To understand that something is in the world, you have to be an embodied being that encounters an object. And you must intuitively sense that there is a space in which a world can appear by means of its objects. This is admittedly a very complex perspective – it sounds demanding and serious – yet we adopt it all the time as if it were nothing. At the same time, what we take for granted here is the highest form of (self-)consciousness.

But what does the car look like in a flat world where it has been broken down into a set of digital properties? Imagine there are three people in the car, a father and his two daughters. In the car, the father sits at the wheel while his two daughters sit in the back seat. The father is calm and composed, his hands firmly gripping the steering wheel as he carefully steers the car through the crowded streets. The two daughters, on the other hand, are full of energy and excitement, their heads turning from side to side as they take in the sights and sounds of the world outside the window. They chat happily about their day, their laughter echoing through the car as their father smiles and listens to their tales.

Let us now imagine that an AI is asked to group all properties with the quality "green". In our case, these are the colour of the car, the green backpack on the back seat, the older daughter's blouse, the younger one's jacket, the father's shoes and the keychain pendant dangling from the car key. Now let's assume that this property is prioritised by the AI above all others. Then we have a situation in which the property "green" is suddenly more important than the property "alive". And we need a spell to understand why an AI-controlled risk assessment ranks the property "alive" lower than "green" and unabashedly lets the car drive over an unsecured level crossing at the exact moment when the goods train fuelled with green hydrogen crosses it.
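How could "green" ever outrank "alive"? A minimal sketch, again in Python and again purely hypothetical – the weights, items and scoring function below are invented for illustration and correspond to no real risk-assessment system – shows how a flat property world plus one misplaced priority produces exactly this inversion:

```python
# Deliberately absurd toy ranking: the weight for "green" has been
# placed above the weight for "alive". All numbers are invented.
WEIGHTS = {"green": 10, "alive": 1}

scene = [
    {"name": "car paint",        "green": True,  "alive": False},
    {"name": "green backpack",   "green": True,  "alive": False},
    {"name": "older daughter",   "green": False, "alive": True},
    {"name": "younger daughter", "green": False, "alive": True},
    {"name": "father",           "green": False, "alive": True},
]

def score(item):
    # Add up the weights of whichever prioritised properties the item has.
    return sum(weight for prop, weight in WEIGHTS.items() if item.get(prop))

# Sorting by this score puts every green object above every living person.
for item in sorted(scene, key=score, reverse=True):
    print(f"{score(item):2d}  {item['name']}")
```

In this toy ordering, the paint and the backpack come out on top and the three passengers end up at the bottom – no magic, just one number set higher than another.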

Today, we still invoke the Turing test for ChatGPT-4 to decide whether the AI has a certain level of language understanding. But in the meantime, we need a new test. Let's call it the Blush Test: can an AI blush? Only then could we speak of an awareness of a vulnerable self, or of a self that shrinks from hurting others. For without shame there are no human things. To blush or not to blush, that is the Father Christmas moment of AI. And P.S.: the paragraph about the family in the car was written by OpenAI, but that's actually irrelevant.

Confusing? Never mind. No understanding of AI has fallen from the sky yet. Take your time with it. But do take it. Because if you don't, the AI will take it from you.