The silicon Prometheus

July 2, 2024

Few enterprises have managed to so thoroughly embody humanity’s capacity for both absolute folly and outstanding achievement in recent years as the tech industry’s relentless pursuit of artificial intelligence. Like Prometheus himself, we find ourselves on the cusp of imbuing our silicon creations with the spark of intelligence, a feat that promises to be either our crowning glory or our final, fatal hubris.

The potential benefits of AI, we are told by the techno-utopians and Silicon Valley prophets, are nothing short of miraculous. Diseases will be cured, poverty eradicated, and the mysteries of the universe unraveled, all at the behest of our benevolent digital overlords. It’s a vision that would make even the most ardent believers of the Rapture blush with envy.

And indeed, one would have to be willfully obtuse to deny the transformative potential of AI. Already, machine learning algorithms are diagnosing diseases with greater accuracy than human physicians, optimizing energy grids, and even composing passable (if somewhat soulless) music – much to the annoyance of real musicians and record companies whose work was quietly shoved through the AI sausage machine without warning, permission or recompense.

The promise of AI is not just in its ability to perform tasks faster or more efficiently than humans, but in its potential to see patterns and connections that our meat-based 22-watt brains, evolved for a world of savannas and predators, simply cannot fathom. Unlike previous tools, from the plough to the processor, which increased human abilities under human command, AI offers the opportunity to create self-directing ‘agents’ which can make their own decisions in pursuit of – one would hope – human-set goals.

Yet, as with all Promethean gifts, AI comes with a price. The short, medium and long-term threats posed by artificial intelligence are numerous and potentially existential. There’s the oft-cited specter of AI-generated misinformation, the hollowing of human creative culture and mass unemployment as AI renders vast swathes of human labor obsolete. But this, I would argue, is merely the appetizer in the banquet of potential catastrophes that AI lays before us.

Consider the implications of AI in warfare. We already have autonomous drones capable of selecting and engaging targets without human intervention. It doesn’t take a great leap of imagination to envision a future where wars are fought entirely by machines, with humans reduced to the role of spectators in their own annihilation. The great irony, of course, is that such a development might actually make war more palatable to the public, removing the messy business of flag-draped coffins and grieving widows from the equation.

Then there’s the question of privacy and surveillance. AI-powered facial recognition systems are already turning our cities into panopticons that would make Jeremy Bentham tumescent with envy. Add to this the ability of AI to process and analyze vast amounts of data, and we’re looking at a world where the very concept of privacy becomes as quaint and outdated as the horse-drawn carriage, not just in communist China but the surveillance capitalism of the West.

But perhaps the most insidious threat posed by AI is its potential to erode our very humanity. As we increasingly defer to algorithms for everything from choosing our entertainment to making moral decisions, we risk atrophying the very faculties that make us human. Some futurists advocate adding processing power to the human brain to compete, raising the danger that, as AI becomes ever more human, we will turn into cyborgs in our desperate attempt to keep up.

And yet, for all these dire warnings, I find myself unable to fully condemn our AI endeavors. There’s something quintessentially human about the whole enterprise, a reflection of our unquenchable curiosity and our desire to create something greater than ourselves. It’s the same impulse that drove us to split the atom, to unravel the human genome, to reach for the stars. Every major innovation in the past – including agriculture and the industrial revolution – wrought huge changes in social and economic relations, but in the end proved to be a vast stride in the advance of civilisation.

The challenge, then, is not to halt the march of AI progress – a futile endeavor if ever there was one, given the sums of money involved – but to guide it with wisdom and foresight. We must ensure that AI remains a tool for human flourishing, rather than an instrument of our obsolescence. This will require not just technological savvy, but a deep engagement with ethics, philosophy, and the fundamental questions of what it means to be human. The tech companies must pay their taxes if they are to hoover up an ever higher percentage of the economy; more than lip-service must be paid to issues of AI safety and alignment; and the ‘move fast and break things’ ethos must change if the consequences of a mistake could devastate mankind.

The outlook for this is not rosy. While most humans are intrinsically good as individuals, once we operate as commercial or bureaucratic entities we tend to place profit over environmental sustainability and moral probity. A succession of AI companies – not least OpenAI – have launched proclaiming their commitment to responsible research, only to switch to rampant commercialisation once their technology’s potential became clear. In the end, the development of AI may well prove to be the ultimate test of our species. It will force us to confront our own limitations, to grapple with questions of consciousness and free will, and to redefine our place in a world where we are no longer the sole possessors of intelligence.

As we stand on this precipice, staring into the abyss of an AI-dominated future, we would do well to remember the words of another great promethean figure, J. Robert Oppenheimer: “Now I am become Death, the destroyer of worlds.” Let us hope that our silicon children prove to be more benevolent gods than we have been.

While the large language models driving generative AI already approach human abilities across a range of spheres, they don’t comprehend the physical world, lack persistent memory and can’t plan or reason to any meaningful degree. Though Noam Chomsky would disagree, many cognitive scientists surmise that we humans think in terms of ideas, concepts and the effects we want to achieve, then translate those ideas into language or action. Auto-regressive LLMs simply – and mindlessly – predict the next word – or pixel or note – on the statistical basis of their training data, and have no concept of their answer being right or wrong. While this method can generate a simulacrum of intelligence good enough for many purposes, LLMs are not and may never be truly intelligent.
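The auto-regressive idea described above can be illustrated with a deliberately crude sketch: a bigram model that counts which word follows which in a tiny training corpus, then extends a prompt one word at a time by picking the statistically most frequent continuation. This is not how real LLMs work internally (they use neural networks over vast corpora, not count tables), but it shows the same mindless next-token mechanism: the model reproduces patterns without any notion of whether its output is right or wrong.

```python
from collections import Counter, defaultdict

# Tiny "training corpus"; a real model would ingest trillions of tokens.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent continuation seen in training, or None."""
    if word not in bigrams:
        return None
    return bigrams[word].most_common(1)[0][0]

def generate(prompt, length):
    """Auto-regressively extend a prompt one word at a time."""
    out = [prompt]
    for _ in range(length):
        nxt = predict_next(out[-1])
        if nxt is None:
            break
        out.append(nxt)
    return " ".join(out)

print(generate("the", 4))  # "the cat sat on the"
```

The output is fluent-looking purely because the statistics of the training text make it so; at no point does the program "know" what a cat or a mat is, which is the essence of the critique above.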

However, if a new generation of models is given – or spontaneously develops – these capabilities, then a scenario in which humanity passes the torch of intelligence to a successor species based on silicon rather than carbon may not remain science fiction for long, and whether that creates a utopia or a dystopia may not be ours to choose.
