Should we work with AI – or kill it?

August 7, 2019

While the data processing power of computers has come a long way in the last 50 years, the design of the computer used in NASA’s Apollo 11 remains extraordinary.

But even at the time of the moon mission, people were uneasy about the growing power of computers and the potential of artificial intelligence (AI).

In Stanley Kubrick’s 1968 film classic 2001: A Space Odyssey, the ship’s computer, HAL (short for Heuristically Programmed ALgorithmic Computer), is an omniscient, speaking, thinking, lip-reading collaborator in at least the first part of the film.

It made for unnerving viewing then, and it still does now.

More than fifty years later, we take our (sometimes speaking) personal computers and smartphones for granted.

But the debate about the impact artificial intelligence has, and could have, on modern society and on our workplaces continues.

Questions such as “Will it take our jobs? Could it take our humanity?” are not outrageous, and are increasingly commonplace.

Recently, at the Byron Bay area music festival Splendour in the Grass, I joined fellow University of Melbourne scholars Professor Uwe Aickelin and Dr Suelette Dreyfus, along with Australia’s national science icon Dr Karl to ponder some of these questions.

Part of the discussion involved fellow researcher and digital media specialist Dr Niels Wouters, who developed the deliberately flawed Biometric Mirror, which aims to highlight some of the problems with our trust in the algorithms behind AI.

The impacts of AI and digital technologies in law and legal education are already widespread, with digital tools used to design apps that promote access to justice, review documents, draft contracts and provide basic contract advice.

But, more generally, what does AI mean for the next generation of professionals? Is the ‘Black Mirror’ future an earphone-wearing, hot-desking set of disconnected, transient workers performing mundane tasks overseen by an algorithm?

The panel agreed that new digital platforms and AI advances are already here, and are likely to continue to have a negative impact on some workers and to replace some routine tasks.

Nonetheless, there is still the opportunity for meaningful work and for human workplace interaction.

The widespread view among technologists and other people working in AI is that it won’t be able to replicate, at least for some time, human talents such as creativity, empathy and specialist problem solving.

Professor Uwe Aickelin makes the point that algorithms do not yet compose meaningful music, but instead produce a rather lacklustre approximation of what talented musicians can do in the real world.

Moreover, it’s possible that technological advances could actually prompt humans to seek out greater levels of interaction and connection with each other.

Those all-important skills of creativity, empathy and problem solving develop best when we are working closely with other people. We simply can’t build empathy on our own.

From Apollo 11 onward, technological advances have come from teams of thinkers and problem solvers. And the continued popularity of music festivals tells us something about the sheer joy humans feel when celebrating such creativity en masse.

A recent study from the Norwegian University of Science and Technology investigated the increasing trend of people choosing to set up a working space with their phones and laptops in coffee shops.

Of course, part of the appeal is the endless coffee and free Wi-Fi, but a cafe can provide a more social alternative to sitting alone and working.

So working trends could in fact become more collaborative rather than more isolated.

People may choose to respond to increasingly technological workplaces by seeking out genuine, sustained, rather than transient, co-working opportunities with other people who complement their own suite of creative problem-solving skills.

But these more optimistic visions of the future workplace are also premised on education, and on the right kinds of education.

Professor Uwe Aickelin also thinks that, while not everyone needs to be a computer programmer, increasingly we will all need basic maths skills and an understanding of how AI and other digital technologies work.

As a legal educator, I would agree.

I would add that in the face of rapid technological change we need to continue conversations about the potential of technology and the values that are important to us in responding to it.

Not everyone needs to be a computer programmer, but we all need basic maths skills and an understanding of how AI works. 

But never discount people power.

The recent decision by Google to stop the widely criticised ticket reseller Viagogo from paying for top billing in its search ads shows that consumer sentiment and regulator outcry can have some influence on even the tech giants.

Perhaps active conversations about the kind of work, and indeed life, we want with AI will help humankind avoid the final confrontation between human and computer posited by Kubrick in 2001: A Space Odyssey – how to turn off the machines.

Professor Jeannie Paterson will be presenting at The Digital Citizens Conference at Melbourne Law School on 24-26 July. The conference brings together computer scientists, engineers, policy makers and lawyers to talk about these kinds of questions of rights and responsibilities in responding to AI and new digital technologies. For more information, visit the website.

This article was published by Pursuit.