Here are some highlights in the history of artificial intelligence (AI), when a computer has managed to get the upper hand on the human… in a game!
1997 – Deep Blue, chess master
Developed by IBM in the early 1990s, Deep Blue is a supercomputer designed to support decision-making in fields such as medicine, finance, and education.
With its mathematical and algorithmic power, Deep Blue defeated world chess champion Garry Kasparov, after losing to him a year earlier. Notably, the machine could evaluate 200 million chess positions per second.
Did you know? The creators of Deep Blue were not chess players; the victory belonged to computer science and mathematics.
2011 – Watson, Jeopardy champion
The supercomputer, launched by IBM in the 2000s, beat its human competitors on the game show Jeopardy! after absorbing 200 million pages of encyclopedias, dictionaries, articles, books, and more.
But Watson does not just play Jeopardy! It can assimilate data specific to various fields, from hospitals to banks and law firms, to help solve targeted problems. Some 500 start-up companies have bought the technology and adapted it to their needs.
The supercomputer is programmed not only to analyze data but also to recognize words and images, make predictions, and even identify emotions or hold a conversation, decoding the other person's language and tone. Watson now operates in several languages, including French.
On the subject of AI, this is how IBM's Watson project leader, Rob High, sees things: "In many ways, AI is not about replicating the human mind. Frankly, we've got plenty of human minds out there already, and from an economic standpoint, replicating the human mind is probably not useful, and it's certainly nowhere near as plausible with current technology. What it is about, on the other hand, is recognizing human limits."
Did you know? Watson was built largely in Quebec: some 2,000 of its components, including all of its microprocessors, came from the IBM plant in Bromont.
2016 – AlphaGo
Designed by DeepMind (Google), AlphaGo is a program that, in 2016, defeated one of the world's best Go players through reinforcement learning. Programming a computer to play Go is particularly complex, far more so than chess, since the number of possible positions is vastly higher.
Released in 2017, the latest version of AlphaGo learns without any human data, playing against itself through its artificial neural network.
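The self-play idea can be illustrated on a much smaller game than Go. The sketch below is a toy, not DeepMind's method: a tabular agent learns the game of Nim (take 1 to 3 stones from a pile; whoever takes the last stone wins) purely by playing against itself, nudging the value of each move toward the game's final outcome. The pile size, rules, and learning parameters are illustrative assumptions.

```python
import random

def train(episodes=20000, alpha=0.3, eps=0.2, n_start=10):
    """Self-play training for Nim: both sides share one table of move values."""
    Q = {}  # (stones_remaining, stones_taken) -> estimated value for the mover
    for _ in range(episodes):
        stones = n_start
        history = []  # moves made this game, alternating between the two players
        while stones > 0:
            actions = [a for a in (1, 2, 3) if a <= stones]
            if random.random() < eps:  # occasionally explore a random move
                a = random.choice(actions)
            else:                      # otherwise play the best-known move
                a = max(actions, key=lambda x: Q.get((stones, x), 0.0))
            history.append((stones, a))
            stones -= a
        # The player who took the last stone wins: credit their moves +1,
        # the loser's moves -1, walking backwards through the game.
        reward = 1.0
        for state, action in reversed(history):
            old = Q.get((state, action), 0.0)
            Q[(state, action)] = old + alpha * (reward - old)
            reward = -reward
    return Q

def best_move(Q, stones):
    """The learned policy: pick the highest-valued legal move."""
    actions = [a for a in (1, 2, 3) if a <= stones]
    return max(actions, key=lambda a: Q.get((stones, a), 0.0))
```

After training, the agent tends to rediscover Nim's known strategy of leaving its opponent a multiple of four stones (for example, taking 1 from a pile of 5), with no human examples provided, only the outcomes of its own games.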
Did you know? In 2017, the creators of AlphaGo announced that they would open a lab in Montreal. DeepMind researchers believe that the method used in the latest version of AlphaGo could have applications in various areas, such as reducing energy consumption and designing new materials.