Artificial intelligence: from manual programming to deep learning

If artificial intelligence (AI) is practically synonymous with “learning” today, just a few years ago things looked very different. Although it was possible to identify printed characters, play chess or make medical diagnoses using logical inferences encoded by experts, the AI of the time was laborious and limited, since every capability had to be programmed by hand.

The beginnings of autonomy

At the beginning of the 2010s, technical and algorithmic advances improved AI performance, in particular in machine learning: a process by which a computer improves itself based on the results it obtains while performing a task.

The most widely used machine learning technique, supervised learning, consists of providing the computer with a learning database of labeled examples (e.g., the image of a tree is associated with the label “tree”). The computer can thus learn to identify elements by referring to the characteristics of the thousands or even millions of examples that make up its database.
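To make the idea concrete, here is a minimal sketch of supervised learning: a toy 1-nearest-neighbour classifier over a tiny “learning database” of labeled feature vectors. The feature values and labels are invented for illustration; real systems extract far richer features from images.

```python
def distance(a, b):
    # Squared Euclidean distance between two feature vectors.
    return sum((x - y) ** 2 for x, y in zip(a, b))

def classify(example, labeled_data):
    # Predict the label of the closest example in the learning database.
    nearest = min(labeled_data, key=lambda pair: distance(pair[0], example))
    return nearest[1]

# Learning database: (features, label) pairs, e.g. crude image statistics.
labeled_data = [
    ((0.9, 0.2), "tree"),
    ((0.8, 0.3), "tree"),
    ((0.1, 0.9), "car"),
    ((0.2, 0.8), "car"),
]

print(classify((0.85, 0.25), labeled_data))  # → tree
```

A new example is labeled by comparison with the stored, labeled ones: the more labeled examples the database holds, the finer the distinctions the system can make.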

Pattern recognition has also developed in recent years: a classification approach that allows the computer to identify different types of computerized “patterns,” not just visual ones – objects or images – but also sounds (speech recognition) and other data (medical records, satellite imagery, etc.). The problem with pattern recognition is that a good feature extractor is difficult to develop and must be reworked for each new application.

Deep learning: a revolution

In the early 2000s, researchers Geoffrey Hinton, Yann LeCun and Yoshua Bengio decided to re-examine the potential of digital artificial neural networks, a technology largely abandoned by researchers from the late 1990s to the beginning of the 2010s. The trio “invented” deep learning, now the most promising branch of AI, reviving interest in the whole field.

Inspired by the functioning of the human brain, these networks of artificial neurons, optimized by learning algorithms (sets of rules), perform calculations organized in layers, the results of each layer feeding the next – hence the qualifier “deep.” While the first layers extract simple features, subsequent layers combine them into increasingly complex concepts.
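The layer-by-layer idea can be sketched in a few lines. Below, two layers of artificial neurons are chained together: the first computes simple features from the input, the second combines them. The weights are fixed toy values chosen for illustration; in a real network, a learning algorithm adjusts them from data.

```python
import math

def layer(inputs, weights, biases):
    # One layer of neurons: weighted sums followed by a sigmoid non-linearity.
    return [
        1.0 / (1.0 + math.exp(-(sum(w * x for w, x in zip(row, inputs)) + b)))
        for row, b in zip(weights, biases)
    ]

x = [0.5, -1.0]                                       # input features
h = layer(x, [[1.0, -1.0], [0.5, 0.5]], [0.0, 0.1])   # first layer: simple features
y = layer(h, [[1.0, 1.0]], [-1.0])                    # second layer: combines them
print(round(y[0], 3))
```

Each `layer` call consumes the previous layer’s outputs, which is exactly what makes the network “deep”: stacking more such calls yields representations of increasing abstraction.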

The principle of this technology is to let the computer find by itself the best way to solve a problem from a considerable amount of data and indications concerning the expected result. Deep learning can use supervised learning as well as unsupervised learning.

The great revolution brought about by deep learning is that the tasks asked of the computer now rest on a shared set of learning principles and algorithms. Whereas AI knowledge used to be subdivided into several types of applications studied in silos, efforts are now concentrated on understanding the learning mechanisms themselves.

The 2011-2012 turning point

Five milestones for deep learning

  1. Graphics processing units (GPUs) capable of more than a trillion operations per second become available for less than $2,000 per card. These very powerful specialized processors, initially designed for video game rendering, prove highly efficient for neural network calculations.
  2. Experiments conducted by Microsoft, Google and IBM, with the collaboration of Geoffrey Hinton’s lab at the University of Toronto, demonstrate that deep networks can halve the error rates of speech recognition systems.
  3. As part of Google Brain, a deep learning research project led by Google, AI manages to learn to “recognize” a cat image among 10 million digital images from YouTube.
  4. Google uses artificial neural networks to improve its speech recognition tools.
  5. Convolutional neural networks – inspired by the visual cortex of mammals – shatter records in image recognition by drastically reducing the error rate. The victory of Geoffrey Hinton’s Toronto team at the prestigious ImageNet image recognition competition confirms the potential of deep learning. Most researchers in speech and vision recognition then turn to convolutional and other neural networks.
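The core operation behind those record-breaking convolutional networks is simple: slide a small filter over an image and measure how strongly each neighbourhood matches the pattern the filter encodes. Here is a minimal sketch (valid-mode 2D convolution, no padding) with a toy vertical-edge detector; the image and kernel values are illustrative only.

```python
def convolve2d(image, kernel):
    # Slide the kernel over every position where it fits entirely,
    # computing the weighted sum of the pixels it covers.
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            row.append(sum(
                image[i + di][j + dj] * kernel[di][dj]
                for di in range(kh) for dj in range(kw)
            ))
        out.append(row)
    return out

# A vertical-edge detector: responds where intensity changes left to right.
image = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
kernel = [[-1, 1], [-1, 1]]
print(convolve2d(image, kernel))  # → [[0, 2, 0], [0, 2, 0]]
```

In a convolutional network, many such filters are learned from data rather than hand-designed, and their responses form the feature maps that deeper layers combine.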

Massive private-sector investment followed in the subsequent years.

What can a computer learn to recognize through deep learning?

  • Visual elements, such as shapes and objects in an image. It can also identify the people in an image and the type of scene depicted. In medical imaging, this makes it possible, for example, to detect cancer cells.
  • Sounds produced by speech, which can be converted into words. This feature is already built into smartphones and digital personal assistants.
  • The most common languages – to translate them.
  • Elements of a game – enough to take part in it and even win against a human opponent.

Yoshua Bengio at Concordia

A global star in artificial intelligence, Montreal-based Yoshua Bengio is one of the keynote speakers in the Concordia President’s Speaker Series on Digital Futures. The session will be held on April 24, and, like all other events in this series, is open to the general public and free.

Dr. Bengio is one of the leaders in deep learning, a technique that develops a computer’s ability to “learn on its own” through artificial neural networks. Co-author of a reference textbook on the subject, he is also one of Canada’s most-cited experts. Among his many titles, he is a professor in the Department of Computer Science and Operations Research at the Université de Montréal, director of the Montreal Institute for Learning Algorithms (MILA), co-director of the Learning in Machines and Brains program of the Canadian Institute for Advanced Research, and Canada Research Chair in Statistical Learning Algorithms.

In addition to his contribution to research, Mr. Bengio has made it his mission to popularize his field of expertise among companies and to take part in the debate on ethical issues related to AI (the Montreal Declaration for Responsible AI).

The up-and-down history of artificial neurons

The first artificial neural network dates from the late 1950s. The “perceptron” – as it was called – could identify simple shapes. A decade or so later, researchers lost interest in the neural approach to AI after scientists at the Massachusetts Institute of Technology (MIT) raised doubts about its potential.
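A perceptron is just a single artificial neuron whose weights are nudged whenever it misclassifies an example. The sketch below trains one on a toy task (the logical AND of two inputs); this data and the learning rate are illustrative choices, not Rosenblatt’s original setup.

```python
def predict(weights, bias, x):
    # Fire (output 1) if the weighted sum of inputs exceeds the threshold.
    return 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0

def train(data, epochs=10, lr=0.1):
    # Perceptron rule: shift weights toward examples the neuron got wrong.
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in data:
            error = target - predict(weights, bias, x)
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

# Toy task: learn the logical AND of two binary inputs.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights, bias = train(data)
print([predict(weights, bias, x) for x, _ in data])  # → [0, 0, 0, 1]
```

The MIT critique hinged on precisely this architecture’s limits: a single neuron can only learn linearly separable patterns (AND works; XOR does not), which is what multi-layer networks later overcame.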

In the 1970s and 1980s, Geoffrey Hinton, Yann LeCun and Kunihiko Fukushima created multi-layer digital artificial neural networks. Inspired by the visual cortex of mammals, these networks allowed computers to learn more complex tasks. However, the available data and computing power remained insufficient, and the technology was neglected for several years.


Author:
Catherine Meilleur

Creative Content Writer @KnowledgeOne. Questioner of questions. Hyperflexible stubborn. Contemplative yogi.

2018-04-13 | Articles | Catherine Meilleur

