In 2019, the Turing Award, considered the “Nobel Prize of computer science”, was given to three pioneers of deep learning, including Montrealer Yoshua Bengio. This technology has had a rollercoaster ride marked by several “winters”: a first in the mid-1960s, a second in the mid-1970s, and a third in the early 2000s. Other, more favorable turns, however, have made it the ubiquitous technology it is today. Let’s take a closer look at the turning points of this remarkable trajectory.
The pivotal moments
1951. Marvin Minsky and Dean Edmonds, two doctoral students in mathematics, build SNARC (stochastic neural analog reinforcement calculator), the first neural network simulator, at Harvard. The machine implements the Hebb rule, which states that when two neurons are activated at the same time, their synapse (functional contact) is strengthened. Although he devoted his thesis to his invention, Minsky did not consider it promising.
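The Hebb rule can be sketched as a one-line weight update: the connection grows in proportion to the joint activation of the two neurons. This is a minimal illustration only; the function name and learning rate are ours, not a description of SNARC's circuitry.

```python
# Minimal sketch of the Hebb rule: when the pre- and post-synaptic
# neurons fire together, the "synapse" (weight) is strengthened.
# Names and the learning rate are illustrative, not taken from SNARC.

def hebbian_update(weight, pre, post, learning_rate=0.1):
    """Strengthen the weight in proportion to joint activation."""
    return weight + learning_rate * pre * post

w = 0.0
w = hebbian_update(w, pre=1, post=1)  # both neurons active: reinforced
w = hebbian_update(w, pre=1, post=0)  # one neuron inactive: no change
print(w)  # 0.1
```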
1957. The American psychologist and project engineer Frank Rosenblatt invents the perceptron, the first machine learning algorithm and the simplest form of artificial neural network. Although Rosenblatt describes his machine as “the first to have an original idea,” the perceptron in fact acts as a linear, binary classifier for categorizing data. It is nevertheless the first model for which a learning procedure can be defined, a crucial innovation for the development of machine learning.
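The idea of a linear, binary classifier with a defined learning procedure can be sketched in a few lines: the perceptron outputs 0 or 1 from a weighted sum, and each misclassified example nudges the weights toward the correct answer. The dataset and parameter values below are illustrative, not from Rosenblatt's original experiments.

```python
# Minimal perceptron sketch: a linear, binary classifier trained with
# the classic error-correction rule. Data and names are illustrative.

def predict(weights, bias, x):
    """Binary output: 1 if the weighted sum crosses the threshold."""
    s = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if s > 0 else 0

def train(samples, labels, epochs=10, lr=1.0):
    """Nudge the weights toward each misclassified example."""
    weights = [0.0] * len(samples[0])
    bias = 0.0
    for _ in range(epochs):
        for x, target in zip(samples, labels):
            error = target - predict(weights, bias, x)
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

# Learn the linearly separable AND function.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 0, 0, 1]
w, b = train(X, y)
print([predict(w, b, x) for x in X])  # [0, 0, 0, 1]
```

Because the perceptron can only draw a straight separating line, it handles AND but not a non-linearly-separable problem like XOR, which is one reason the model's limits later became apparent.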
The 1970s and 1980s. Kunihiko Fukushima, Yann LeCun, and Canadians Geoffrey Hinton and Yoshua Bengio all contribute to the creation of multi-layer artificial neural networks. Inspired by the visual cortex of mammals, these networks allow computers to learn more complex tasks: deep learning is born. The technology is nevertheless neglected for several years, because computing power and available data are insufficient.
2010. The stars align for deep learning to take off again: more affordable graphics processors with high computing power enter the market, while the explosion of the Internet enables the rise of Big Data, essential for training computers to “learn”. To “recognize” an image of a tree, for example, an AI must first be “fed” tens of thousands of tree images; to learn more complex tasks, it needs hundreds of millions of images.
2011-2012. These two years mark a turning point for deep learning when five decisive events occur.
Graphics processors capable of performing more than a trillion operations per second become available for less than $2,000 per card. Originally designed for the graphics rendering of video games, they prove highly efficient for neural network calculations.
Experiments conducted by Microsoft, Google and IBM, in collaboration with Geoffrey Hinton’s laboratory, show that deep networks can halve the error rates of speech recognition systems.
As part of the Google Brain research project, an AI learns to “recognize” cat images among 10 million digital images taken from YouTube.
Google improves its speech recognition tools by using artificial neural networks.
Convolutional neural networks – inspired by the visual cortex of mammals – break records in image recognition by significantly reducing the error rate. The victory of Geoffrey Hinton and his Toronto team in the prestigious “ImageNet” object recognition competition confirms the potential of deep learning. This victory, which led to today’s AI boom, opens the door to massive private-sector investment in the following years and prompts speech and image recognition researchers to turn to deep learning.
2016. The AlphaGo program from Google DeepMind beats one of the best Go players in the world, South Korea’s Lee Sedol. The program scores a second significant victory at the same game the following year against the reigning world champion, China’s Ke Jie. Also in 2016, the influential book Deep Learning, co-authored by Yoshua Bengio, is published. Three years later, it becomes the first book in the world to be translated by an AI: the system took 12 hours to translate the 800 pages into French, the language of Molière, and only 15% of the text had to be revised by human intelligence…
2019-2020. The three pioneers of deep learning, Yann LeCun, Geoffrey Hinton and Yoshua Bengio, receive the 2019 Turing Award, considered the “Nobel Prize of computer science”. Note that Montrealer Yoshua Bengio, one of the world’s most influential AI experts, has chosen to pursue his work as a researcher, professor, entrepreneur and engaged citizen in the Quebec metropolis. He is partly responsible for Montreal’s influence and drawing power as a “crossroads of deep intelligence”, home to the largest university-based AI community on the planet (see Artificial Intelligence: Montreal, the Star of the Moment).
- Artificial intelligence: from manual programming to deep learning
- Mini glossary of artificial intelligence
- Artificial intelligence: Montreal, the star of the moment
- Artificial Intelligence: 3 Promising Innovations From Quebec
- AI, make me laugh!
- Human vs. machine battle
- Next-generation personalized learning
- Will a robot replace your job?
Catherine Meilleur has over 15 years of experience in research and writing. Having worked as a journalist and educational designer, she is interested in everything related to learning: from educational psychology to neuroscience, and the latest innovations that can serve learners, such as virtual and augmented reality. She is also passionate about issues related to the future of education at a time when a real revolution is taking place, propelled by digital technology and artificial intelligence.