Deep Thought – Artificial Intelligence
Many years ago I was on the computer engineering team at Honeywell in Melbourne. One day we received a call from one of our large customers: the computer had made a mistake!
At first we laughed this off – GIGO (garbage in, garbage out) was the well-known response. However, when we had a closer look at the machine in question, the claim really did appear to be true.
We asked for the engineering plans, only to be told they were not available. Apparently this high-tech machine had been designed and built in Texas by another computer, and no one was quite sure how it was constructed. After some weeks of work we finally found and overcame the problem.
Some years later one of the English universities built a computer to learn for itself. They set it the task of designing and writing a program to emulate Microsoft Word. It succeeded, but when the programmers examined the program it generated, they found it impossible to follow the logic. Indeed, some of the code appeared redundant: it seemed to achieve nothing and led to a dead end. Yet when that code was removed, the whole program ceased to function.
These two examples show that machine logic is alien to human logic – not incorrect, merely completely different. Today, computers control most of our lives and activities: delivering power to our homes, managing traffic, freight, communications and aircraft operations, and running the many devices around the house. We rely on them almost completely.
Google's DeepMind has just announced AlphaGo Zero, a program which taught itself the game of Go and beat the previous computer Go champion 100 games to zero. This is a major advance, and they are to be commended for achieving this level of technology. However, no one knows exactly how the program works or the logic behind it.
When we are dealing with games, this doesn't matter much. But look only a little way into the future: if anyone used this technique to let a computer program itself to control some aspect of human activity, we could never be sure that it would place an emphasis on human safety rather than on some more machine-logical goal, such as shutting down the system to save some of its components or diverting goods needed for an emergency elsewhere, for reasons it considered quite valid.
We can never assume that a machine has the safety and comfort of humanity as its primary goal, no matter what the so-called Three Laws of Robotics say.
Alan Stevenson spent four years in the Royal Australian Navy, four years at a seminary in Brisbane, and the rest of his life in computers as an operator, programmer and systems analyst. His interests include popular science, travel, philosophy and writing for Open Forum.