The past, present and future of AI

More than chess players

"The artificial intelligence community was so impressed with the really cool algorithms they were able to come up with and these toy prototypes in the early days," explains Ferrucci.

"They were very inspiring, innovative and extremely suggestive. However, the reality of the engineering requirements and what it really takes to make this work was much harder than anybody expected."

The word 'toy' is the key one here. Ferrucci refers to a paper from 1970 called 'Reviewing the State of the Art in Automatic Questioning and Answering', which concluded that "all the systems at the time were toy systems. The algorithms were novel and interesting, but from a practical perspective they were ultimately unusable."

For example, by the 1970s computers could play chess reasonably well, which rapidly led to false expectations about AI in general. "We think of a great chess player as being really smart," says Ferrucci. "So, we then say that we have an artificially intelligent program if it can play chess."

However, Ferrucci also points out that a human characteristic that marks us out as intelligent beings is our ability to communicate using language. "Humans are so incredibly good at using context and cancelling out noise that's irrelevant and being able to really understand speech," says Ferrucci, "but just because you can speak effectively and communicate doesn't make you a super-genius."

IBM's Deep Blue computer might have beaten chess champion Garry Kasparov back in 1997, but even now computers struggle to communicate with a human through natural language.

Thinking robots

Language isn't everything when it comes to AI, though. Earlier this year, Ross King's department at Aberystwyth University demonstrated an incredible robotic machine called Adam that could make scientific discoveries by itself.

"Adam can represent science in logic," explains King, "and it can infer new hypotheses about what can possibly be true in this area of science. It uses a technique called abduction, which is like deduction in reverse. It's the type of inference that Sherlock Holmes uses when he solves problems – he thinks [about] what could possibly be true to explain the murder, and once he's inferred that then he can deduce certain things from what he's observed.

ALMOST AUTONOMOUS: Ross King's Adam machine can make scientific discoveries on its own

"Adam can then abduce hypotheses, and infer what would be efficient experiments to discriminate between different hypotheses, and whether there's evidence for them," King expands. "Then it can actually do the experiments using laboratory automation, and that's where the robots come in. It can not only work out what experiment to do; it can actually do the experiment, and it can look at the results and decide whether the evidence is consistent with the hypotheses or not."

Adam has already successfully performed experiments on yeast, in which it discovered the purpose of 12 different genes. The full details can be found in a paper called 'The Automation of Science' in the journal Science.

King's team are now working on a new robot called Eve that can do similar tasks in the field of drug research.

Understanding language

Adam is an incredible achievement, but as King says, "the really hard problems you see are to do with humans interacting. One of the advantages with science as a domain is that you don't have to worry about that. If you do an experiment, it doesn't try to trick you on purpose."

Getting a computer to communicate with a human is a definite struggle, but it's a field that's progressing. As a case in point, the chatbot Jabberwacky gets better at communicating every day. I log into it, and it asks if I like Star Wars. I tell it that I do, and ask the same question back. Jabberwacky tells me that it does like Star Wars. "Why?" I ask.

"It's a beautiful exploration, especially for the mainstream, of dominance and submission," it says. I think I smell a rat, and I ask Jabberwacky's creator Rollo Carpenter what's going on. "None of the answers are programmed," claims Carpenter. "They're all learned."

Jabberwacky thrives on constant input from users, which it analyses and stores in its extensive database. "The first thing the AI said was what I had just said to it," explains Carpenter. Twelve years later, that database holds over 19 million entries.

With more input, Jabberwacky uses machine learning to discover more contexts in which certain sentences are appropriate. Its opinion on Star Wars was a previous user's response, quoted verbatim at the appropriate moment.
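Carpenter gives no implementation details, but that learn-by-storing behaviour is easy to caricature. The sketch below is an assumption of mine, not Jabberwacky's actual design: the word-overlap matching rule and the stored exchanges are invented purely to show the idea of replaying a remembered reply when the context fits.

```python
# A toy sketch of learning by storing: every exchange is remembered, and a
# reply is chosen by finding the stored prompt that best matches what has
# just been said. Purely illustrative – not Jabberwacky's actual design.
memory = []  # (prompt, reply) pairs harvested from past conversations

def learn(prompt, reply):
    memory.append((prompt.lower(), reply))

def respond(prompt):
    if not memory:
        return prompt  # early Jabberwacky simply echoed what was said to it
    words = set(prompt.lower().split())
    # Pick the stored prompt sharing the most words with the new one.
    _, best_reply = max(memory, key=lambda pair: len(words & set(pair[0].split())))
    return best_reply

learn("do you like star wars", "It's a beautiful exploration of dominance and submission.")
learn("do you like cats", "Only when they stay off my keyboard.")
print(respond("Do you like Star Wars?"))  # replays the stored Star Wars reply
```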

The smart part here isn't what it says, but how it recognises the context in which to say it. However, Carpenter is confident that it will soon evolve beyond regurgitating sentences verbatim.

"The generation of all sentences will come quite soon," says Carpenter. "It's already in use in our commercial AI scripting tools, and will be applied to the learning AI."

Carpenter's latest project is Cleverbot, which uses a slightly different technique for understanding language.

HIT AND MISS: Cleverbot sometimes says inappropriate things, but on occasions it's indistinguishable from a human

"Jabberwacky uses search techniques," explains Carpenter, "whittling down selections ever-smaller for numerous and ever-more contextual reasons until a final decision is made. Cleverbot uses fuzzy string comparison techniques to look into what's been said and their contexts in more depth. When responses appear planned or intelligent, it's always because of these universal contextual techniques, rather than programmed planning or logic."

So convincing is Cleverbot that Carpenter regularly gets emails from people who suspect the chatbot is occasionally swapped for a real person. Cleverbot's answers aren't always convincing, but Carpenter's techniques have won him the Loebner Prize for the 'most humanlike' AI in 2005 and 2006.

It's elementary

However, perhaps the biggest milestone when it comes to natural language is IBM's massive Watson project, which Ferrucci says uses "about 1,000 compute nodes, each of which has four cores".

The huge amount of parallelisation is needed because of the intensive searches Watson initiates to find its answers. Watson's knowledge comes from dictionaries, encyclopedias and books, but IBM wanted to shift the focus away from databases and towards processing natural language.

"The underlying technology is called Deep QA," explains Ferrucci. "You can do a grammatical parse of the question and try to identify the main verb and the auxiliary verbs. It then looks for an answer, so it does many searches."

Each search returns long lists of possibly relevant passages, documents and facts, each of which could contain several possible answers. That can leave Watson with hundreds of candidate answers to a single question.

Watson then has to analyse them using statistical weights to work out which answer is most appropriate. "With each one of those answers, it searches for additional evidence from existing structured or unstructured sources that would support or refute those answers, and the context," says Ferrucci.
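IBM's real DeepQA combines hundreds of evidence scorers, but the weighting step Ferrucci describes can be sketched as a simple weighted sum over candidate answers. Everything below – the features, the weights and the scores – is invented for illustration, not IBM's actual model.

```python
# A heavily simplified, hypothetical version of the evidence-weighting step:
# every candidate answer gets scores from several evidence checks, and a
# weighted sum turns those into a single confidence figure.
WEIGHTS = {
    "passage_support": 0.5,  # how strongly supporting passages mention the answer
    "type_match": 0.3,       # does the answer's type match what the question asks for?
    "popularity": 0.2,       # prior likelihood from source coverage
}

candidates = {
    # candidate answer -> raw evidence scores in [0, 1] (all invented)
    "Sahara":   {"passage_support": 0.9, "type_match": 1.0, "popularity": 0.8},
    "Kalahari": {"passage_support": 0.3, "type_match": 1.0, "popularity": 0.4},
    "Algiers":  {"passage_support": 0.6, "type_match": 0.0, "popularity": 0.7},
}

def confidence(scores):
    return sum(WEIGHTS[f] * scores[f] for f in WEIGHTS)

ranked = sorted(candidates, key=lambda c: confidence(candidates[c]), reverse=True)
for c in ranked:
    print(c, round(confidence(candidates[c]), 2))
# Sahara 0.91, Kalahari 0.53, Algiers 0.44
```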

Once it has its answer, Watson speaks it back to you with a form of voice synthesis, putting together the various sounds of human speech (phonemes) to make the sound of the words that it's retrieved from its language documents. In order to succeed in the Jeopardy! challenge, Watson has to buzz in and speak its answer intelligibly before its human opponents.

Not only that, but it has to be confident enough in its answer – if it isn't, it won't buzz in.
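In practice, that "confident enough" rule amounts to a threshold on the confidence score from the ranking step above. A minimal sketch, with a made-up threshold value rather than IBM's:

```python
# Toy buzz-in rule: only answer if the best candidate's confidence clears a
# threshold. The 0.7 figure is invented for illustration.
BUZZ_THRESHOLD = 0.7

def should_buzz(best_confidence):
    return best_confidence >= BUZZ_THRESHOLD

print(should_buzz(0.91))  # True  – buzz in and answer
print(should_buzz(0.53))  # False – stay silent rather than risk a wrong answer
```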

Watson doesn't always get it right, but it comes close. On CNN, the computer was asked which desert covers 80 per cent of Algeria. Watson replied: "What is Sahara?" The correct answer is in there, and intelligible, but – asked a straight question – Watson still phrased its reply in Jeopardy! style.

The future

As you can see, we're still a long way from creating HAL, or even passing the Turing Test, but the experts are still confident that this will happen. Ross King says that this is 50 years away, but David Ferrucci says that 50 years would be his "most pessimistic" guess.

His optimistic guess is 10 years, but he adds that "we don't want a repeat of when AI set all the wrong expectations. We want to be cautious, but we also want to be hopeful, because if the community worked together it could surprise itself with some really interesting things."

The AI community is currently divided into specialist fields, but Ferrucci is confident that if everyone worked together, a realistic AI that could pass the Turing Test would arrive much sooner.

"We need to work together, and hammer out a general-purpose architecture that solves a broad class of problems," says Ferrucci. "That's hard to do. It requires many people to collaborate, and one of the most difficult things to do is to get people to decide on a single architecture, but you have to because that's the only way you're going to advance things."

The question is whether that's a worthwhile project, given everybody's individual goals, but Ferrucci thinks that's our best shot. Either way, although the timing of the early visionaries' predictions was off by a fair way, the AI community still looks set to meet those predictions later this century.
