Mental aberrations

July 25, 2024

We are all subject to pareidolia; who hasn’t lain on the ground looking up at clouds and seen all sorts of things in them – animals, common objects, fantastic beasts and, yes, faces?

Faces are an incredibly common form of pareidolia. We see them everywhere, including in wood grain patterns, foods, and other everyday objects.

Our brain is wired to see faces, which isn’t too surprising, seeing as how this is the main way we recognise other people. But it has the unintended consequence of forcing us to see faces when they aren’t really there. The simplest example of this is a smiley face: it’s two dots over a curved line, just about as simple a geometric construction as there can be, yet you cannot not see it as a smiling face. (Incidentally, we’ve seen smiley faces on Mars, too.)

Humans are interacting more than ever with artificial intelligence (AI) – from the first “social robots” (robots with physical bodies programmed to interact and engage with humans), like Kismet in the 1990s, to smart speakers such as Amazon’s Alexa. And this technology is changing how humans relate to it – and to each other.

New research looked at how humans experience interacting with AI social robots and digital avatars – AI virtual chatbots designed to look and interact like a human on a device. Both are built to encourage people to engage with them.

Ever since the telescope was invented, pareidolia has ruled the way we’ve named what we saw. The most iconic example of them all is the Horsehead nebula. Aptly named, it looks like a cosmic chess piece seen in profile, stoically waiting for its next move. The Horsehead is located in the constellation Orion and is part of the immense Orion molecular cloud complex, a sprawling collection of cold, dense gas and dust. Such clouds are star-formation factories, spawning suns that light up the material all around them. The Horsehead is an extension of dark dust silhouetted against a bank of ruddily glowing hydrogen, lit up by the massive star Sigma Orionis not too far above the horse’s “head.”

The “Man in the Moon,” however, is a poser. Various explanations have been advanced for why people see a face in the moon’s chaotic, giant-impact-sculpted mix of bright highlands and dark plains, but none are convincing. Still, around first quarter, a pair of letters appears on the moon: the Lunar X and V, shapes created by light and shadow as the sun rises over a group of craters, illuminating their raised rims. Several other pareidolic features can be seen as well; people have thought that the huge impact crater Clavius looks like a cartoonishly surprised face.

Social robots such as ElliQ and Pepper are popular in Europe, Japan and the United States, particularly as aids for the elderly. New Zealand has been slower to adopt these technologies.

Since the pandemic, social robots and digital avatars have been used to address issues such as loneliness and mental health. In one Scottish experiment during the pandemic, people were introduced to the social robot “Pepper” over regular video chats. The researchers found the interactions lifted the mood of the participants. Given the uncertainties around the long-term use of these technologies, researchers and policymakers have a responsibility to question how they will affect humans, individually and in wider society.

While AI social technologies offer benefits, such as helping to address loneliness and health issues, it is important to understand the broader implications of their use.

The COVID-19 pandemic showed how easily people were able to shift from in-person interactions to online communications. It is easy to imagine how this might change further, for example where humans become more comfortable developing relationships with AI social technology. There are already cases of people seeking romantic relationships with digital avatars.

The tendency of people to forget they are engaging with AI social technologies, and to feel as if they are “real”, raises concerns about unsustainable or unhealthy attachments. As AI becomes more entrenched in daily life, international organisations are acknowledging the need for guardrails to guide its development and use. It is clear governmental and regulatory bodies need to understand and respond to the implications of AI social technologies for society.

The European Union’s recently passed AI Act offers a way forward for other governments. The AI Act provides clear regulations and obligations regarding specific uses of AI.

It is important to recognise the unique characteristics of human relationships as something that should be protected. At the same time, we need to examine the probable impact of AI on how we engage and interact with others. By asking these questions we can better navigate the unknown.

Is it any wonder that people are still reporting sightings of the Bunyip, the Tasmanian tiger, Bigfoot, the Yeti, aliens and the Loch Ness Monster? If nothing else, they make life more interesting.
