In the early 2000s, researchers Geoffrey Hinton, Yann LeCun and Yoshua Bengio decided to re-examine the potential of digital artificial neural networks, a technology the research community had largely set aside from the late 1990s until the early 2010s. The trio went on to “invent” deep learning, now the most promising branch of AI, and revived interest in the field.
Inspired by the workings of the human brain, these networks of artificial neurons, optimized by learning algorithms (sets of rules), perform calculations and are organized in layers: the output of each layer feeds the next, hence the qualifier “deep.” The first layers extract simple features, and the subsequent layers combine them into increasingly complex concepts.
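To give a rough idea of this layered structure, here is a minimal sketch in Python (using NumPy); the layer sizes and random weights are purely illustrative choices, not anything from the article, and the point is simply that each layer’s output becomes the next layer’s input:

```python
import numpy as np

def relu(x):
    # Simple non-linearity applied after each layer
    return np.maximum(0.0, x)

# Hypothetical network: 4 inputs -> 8 hidden -> 8 hidden -> 2 outputs
rng = np.random.default_rng(0)
layer_sizes = [4, 8, 8, 2]
weights = [rng.normal(scale=0.5, size=(m, n))
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x):
    # Each layer's result feeds the following layer -- the "deep" stack
    activation = x
    for w in weights:
        activation = relu(activation @ w)
    return activation

sample = rng.normal(size=4)   # one input example
print(forward(sample))        # output of the final layer
```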
The principle of this technology is to let the computer find, by itself, the best way to solve a problem from a large amount of data and indications about the expected result. Deep learning can rely on supervised learning as well as unsupervised learning.
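To make the “data plus expected results” idea concrete, here is a small, hedged sketch of supervised learning with the same kind of layered network: the weights are adjusted step by step so that the network’s outputs move closer to the labels supplied with the data. The XOR task, the layer sizes and the learning rate are illustrative assumptions, not details from the article:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy supervised dataset: inputs and the expected results (labels)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR problem

# One hidden layer of 8 units, one output unit (illustrative sizes)
W1 = rng.normal(size=(2, 8))
W2 = rng.normal(size=(8, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(5000):
    # Forward pass, layer by layer
    h = sigmoid(X @ W1)       # hidden layer
    out = sigmoid(h @ W2)     # output layer

    # Difference between prediction and expected result
    err = out - y

    # Backward pass: gradients of a squared-error loss
    grad_out = err * out * (1 - out)
    grad_W2 = h.T @ grad_out
    grad_h = grad_out @ W2.T * h * (1 - h)
    grad_W1 = X.T @ grad_h

    # Adjust the weights -- this is the "learning" step
    W2 -= lr * grad_W2
    W1 -= lr * grad_W1

# Predictions after training; with these settings they typically
# approach the expected labels [0, 1, 1, 0]
print(np.round(out, 2))
```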
The great revolution brought about by deep learning is that the tasks asked of the computer now rest largely on its principles and algorithms. Whereas AI knowledge used to be subdivided into several types of applications studied in silos, efforts are now more concerted and focus on understanding the learning mechanisms themselves.
What can a computer learn to recognize through deep learning?
- Visual elements, such as shapes and objects in an image. It can also identify the people in an image and specify the type of scene shown. In medical imaging, this makes it possible, for example, to detect cancer cells.
- The sounds of speech, which can be converted into words. This feature is already built into smartphones and digital personal assistants.
- The most common languages, in order to translate them.
- The elements of a game, in order to take part in it … and even win against a human opponent.
Related articles:
- Mini glossary of artificial intelligence
- Initiatives for a Responsible and Human-Centered Artificial Intelligence
- AI, make me laugh!
- Will a robot replace your job?
- Human vs. machine battle
- Artificial intelligence: Montreal, the star of the moment
- Intelligent Adaptive Learning: Everyone’s Training!
- Mini glossary of technology in learning
- The Web from 1.0 to 4.0
Author:
Catherine Meilleur
Communication Strategist and Senior Editor @KnowledgeOne. Questioner of questions. Hyperflexible stubborn. Contemplative yogi
Catherine Meilleur has over 15 years of experience in research and writing. Having worked as a journalist and educational designer, she is interested in everything related to learning: from educational psychology to neuroscience, and the latest innovations that can serve learners, such as virtual and augmented reality. She is also passionate about issues related to the future of education at a time when a real revolution is taking place, propelled by digital technology and artificial intelligence.