‘Alan Turing: His Work and Impact’ wins award for academic publishing

Prof Mark Bishop has won a top award for academic publishing for his contribution to Alan Turing: His Work and Impact, which won the R.R. Hawkins Award at the 2013 PROSE Awards.

Extract:
In popular culture, the great English polymath Alan Turing is perhaps best remembered for his work on the Bombes, the giant electro-mechanical devices used in the Ultra secret intelligence work carried out at Bletchley Park in World War II. This work would help break the German Enigma machine’s encrypted war-time signals; work so valuable that it subsequently led Churchill to reflect that “it was thanks to Ultra that we won the war”.

In my area of research – Artificial Intelligence (A.I.) – Turing is better known for the seminal reflections on machine intelligence outlined in his 1950 paper ‘Computing Machinery and Intelligence’.

This paper focussed on the core philosophical question: “can a machine think?” This is a question which, in its literal form, Turing famously described as being “too meaningless to deserve discussion”.

Instead, Turing replaced it with the more objective, testable proposition that a machine can be made to play the ‘imitation game’ (an imagined Victorian-style parlour game) at least as well as the ‘average’ human; a procedure now known as Turing’s test (for machine intelligence).

In the initial exposition of the game that has become known as the ‘Turing Test’, Turing called for a human interrogator (C) to hold a conversation with a male and a female respondent (A and B), with whom the interrogator could communicate only indirectly by typewritten text.

The object of this game was for the interrogator to correctly identify the gender of the players (A and B) purely as a result of such textual interactions. What makes the task non-trivial is that: (a) the respondents are allowed to lie; and (b) the interrogator is allowed to ask questions ranging over the whole gamut of human experience.
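The structure of this original game can be sketched in a few lines of Python. The sketch below is purely illustrative and nothing Turing himself specified: the two reply functions are hypothetical stand-ins for the hidden players (who are free to lie), and the interrogator’s verdict is simply hard-coded.

```python
# A toy sketch of one round of Turing's original gender 'imitation game'.
# The reply functions are hypothetical stand-ins for the hidden players
# A and B, who communicate only by typewritten text and are free to lie.

def reply_from_A(question: str) -> str:
    return "My hair is shingled, and the longest strands are about nine inches long."

def reply_from_B(question: str) -> str:
    return "Don't listen to A - I am the woman!"

def play_round(questions):
    """The interrogator (C) questions both hidden respondents by text alone."""
    transcript = []
    for q in questions:
        transcript.append(("A", q, reply_from_A(q)))
        transcript.append(("B", q, reply_from_B(q)))
    # C must now decide, purely from the transcript, which player is which;
    # here the verdict is hard-coded, and may of course be wrong.
    verdict = {"A": "man", "B": "woman"}
    return transcript, verdict

if __name__ == "__main__":
    transcript, verdict = play_round(["How long is your hair?"])
    for player, q, a in transcript:
        print(f"C -> {player}: {q}")
        print(f"{player} -> C: {a}")
    print("Interrogator's verdict:", verdict)
```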

At first glance it is perhaps a little surprising that, even after numerous textual interactions, a skilled interrogator can determine (more accurately than by chance) the correct gender of the respondents.

But in this sense Turing’s Victorian-esque parlour game describes a scenario not unlike situations that 21st-century video-gamers encounter when participating in large multi-user virtual worlds, such as World of Warcraft or Second Life. Here, in-game avatars controlled by real-world players do not always reflect the gender of the players controlling them; the controller may be female and the avatar male (and vice versa).

Turing then asked what will happen when a machine takes the part of (A) in this game: would the interrogator decide wrongly as often as when the game is played between a man and a woman? In one flavour of this game, which has become known as the ‘standard interpretation’ of the Turing Test, a suitably programmed computer plays as either the man or the woman, and the interrogator (C) simply has to determine which respondent is the human and which is the machine.
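Again purely as an illustrative sketch (the reply functions below are hypothetical placeholders rather than real conversational A.I.), the standard interpretation changes only who sits behind the two text channels and what the interrogator must name:

```python
import random

# A toy sketch of the 'standard interpretation': one hidden respondent is
# human, the other a suitably programmed machine, and the interrogator must
# decide which is which from typewritten text alone.

def human_reply(question: str) -> str:
    return "Count me out on this one. I never could write poetry."

def machine_reply(question: str) -> str:
    return "I am certainly not a machine; please, ask me anything you like."

def standard_trial(questions, rng=random):
    # Assign the machine to position A or B at random, so the labels
    # themselves give nothing away.
    machine_is_A = rng.random() < 0.5
    ask_A, ask_B = (machine_reply, human_reply) if machine_is_A else (human_reply, machine_reply)
    transcript = [(q, ask_A(q), ask_B(q)) for q in questions]
    # The interrogator's decision procedure is left as a stub; a real
    # interrogator would study the transcript before naming the machine.
    named_machine = "A"
    correct = (named_machine == "A") == machine_is_A
    return transcript, named_machine, correct

if __name__ == "__main__":
    _, named, correct = standard_trial(["Please write me a sonnet on the subject of the Forth Bridge."])
    print(f"The interrogator names {named} as the machine; correct identification: {correct}")
```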

In his 1950 paper Turing confidently predicted that by the year 2000 there would be computers with 1GB of storage (a remarkably prescient estimate) able to pass the Turing Test; that is, to perform such that the average interrogator would not have more than a 70% chance of making the right identification after five minutes of questioning.
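On one common reading, this criterion amounts to a simple arithmetical check: a machine ‘passes’ if, over many five-minute sessions, interrogators identify it correctly no more than 70% of the time. The figures in the sketch below are invented purely for illustration.

```python
# Turing's 1950 criterion, on one common reading: the average interrogator
# should have no more than a 70% chance of making the right identification
# after five minutes of questioning.

def passes_turing_criterion(correct_identifications: int, sessions: int) -> bool:
    """True if interrogators named the machine correctly in at most 70% of sessions."""
    return correct_identifications / sessions <= 0.70

# Invented figures: interrogators guessed right in 19 of 30 five-minute
# sessions (about 63%), so on this reading the machine would pass.
print(passes_turing_criterion(19, 30))   # True
```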

And sure enough, on 6 September 2011 (merely 11 years after Turing’s predicted date) the New Scientist magazine triumphantly announced: “software called Cleverbot has passed one of the key tests of artificial intelligence: the Turing Test at the Techniche festival in Guwahati, India.”

However, it is doubtful that the New Scientist’s announcement means that “general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted” (as Turing also predicted).

This is because: (a) there is suspicion that the experimental protocol did not, in the strictest sense, conform to any of the established interpretations of Turing’s test; and (b) in the 63 years since ‘Computing Machinery and Intelligence’ was first published, the status of his test as a definitive measure of machine intelligence and understanding has been extensively criticised.

Perhaps the best known critique of purely ‘computational explanations of mind’ comes from the American philosopher John Searle.

In his infamous Chinese room argument, Searle endeavours to show that even if a computer behaves in a manner fully indistinguishable from that of a human (when, say, answering questions about a simple story), it cannot be said to genuinely understand its responses, and hence the computer cannot properly be said to genuinely think or instantiate mind.

If Searle is correct, then explanations of human thought will need to go much deeper than the sophisticated mimicry on offer from the current batch of ‘conversational’ computer programs. Indeed, much new research in the area now views cognition as an essentially embodied, enacted activity carried out by an agent fundamentally embedded in its society. If cognition is to be understood in this way, I believe it is finally time to move away from Turing’s essentially computational metaphor of mind, towards more radical embodied, embedded, enactive and ecological approaches to cognition.