Deep Learning is a branch of Artificial Intelligence that grew out of Machine Learning. To understand what Deep Learning is, it helps to first understand what Machine Learning is.
Definition and origins of Deep Learning
The concept of Machine Learning dates back to the middle of the 20th century. In the 1950s, the British mathematician Alan Turing imagined a machine capable of learning, a “Learning Machine”. Over the following decades, various Machine Learning techniques were developed to create algorithms able to learn and improve on their own.
Artificial neural networks
These techniques include artificial neural networks. These algorithms underpin Deep Learning, as well as technologies such as image recognition and robotic vision. Artificial neural networks are inspired by the neurons of the human brain. They consist of many artificial neurons connected to one another and organized into successive layers. The more layers a network has, the deeper it is.
Deep Learning: how it works
In the human brain, a neuron can receive electrical signals from as many as 100,000 other neurons. Each active neuron can have an excitatory or inhibitory effect on those to which it is connected. The principle is similar in an artificial network, where signals travel between neurons. Instead of an electrical signal, however, the network assigns a weight to each connection between neurons. A neuron receiving a stronger weighted input exerts more influence on the neurons of the next layer. The final layer of neurons combines these signals to produce a response.
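The weighted-signal idea above can be sketched in a few lines of Python. This is a minimal illustration, not a real network: the inputs, weights, and sigmoid activation below are invented purely to show how positive weights excite, and negative weights inhibit, a neuron's output.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum of inputs through a sigmoid."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # squashes the output into (0, 1)

# Positive weights push the output toward 1 (excitatory effect),
# negative weights push it toward 0 (inhibitory effect).
excite = neuron([1.0, 1.0], [2.0, 2.0], 0.0)    # strong positive signal
inhibit = neuron([1.0, 1.0], [-2.0, -2.0], 0.0)  # strong negative signal
```

With the same inputs, the excitatory weights drive the output close to 1 while the inhibitory weights drive it close to 0, which is exactly the "more influence / less influence" behaviour described above.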
To understand how Deep Learning works, let’s take a concrete example: image recognition. Imagine that a neural network is used to recognize photos that contain at least one cat. To identify the cats in the photos, the algorithm must be able to distinguish different types of cats and to recognize a cat reliably, regardless of the angle from which it is photographed.
To achieve this, the neural network must be trained. This requires compiling a training set: thousands of pictures of different cats, mixed with images of objects that are not cats. These images are converted into data and fed to the network. Artificial neurons then assign weights to the different elements, and the final layer of neurons gathers this information to decide whether or not the image shows a cat.
The network then compares its response to the correct answers given by humans. If they match, the network keeps this success in memory and will rely on it later to recognize cats. Otherwise, the network takes note of its error and adjusts the weights on its connections to correct it. The process is repeated thousands of times, until the network can recognize a cat in a photo under all circumstances. This learning technique is called supervised learning.
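The adjust-on-error loop described above can be sketched as a single artificial neuron trained with the classic perceptron rule: when the prediction disagrees with the human-given label, the weights are nudged toward the correct answer. Everything here is invented for illustration; the two "features" stand in for whatever earlier layers would extract from a photo.

```python
def predict(weights, bias, features):
    """Fire (1) if the weighted sum of features clears the threshold."""
    return 1 if sum(w * x for w, x in zip(weights, features)) + bias > 0 else 0

def train(samples, epochs=20, lr=0.1):
    """Supervised learning: adjust weights whenever the prediction is wrong."""
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for features, label in samples:
            error = label - predict(weights, bias, features)
            if error:  # wrong answer: move the weights toward the true label
                weights = [w + lr * error * x for w, x in zip(weights, features)]
                bias += lr * error
    return weights, bias

# Invented toy labels: (has_whiskers, has_pointed_ears) -> is_cat
data = [([1, 1], 1), ([1, 0], 0), ([0, 1], 0), ([0, 0], 0)]
weights, bias = train(data)
```

After a handful of passes over the labelled examples, the neuron's weights settle into values that reproduce the human answers, which is the essence of supervised learning.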
Another learning technique is unsupervised learning. This technique relies on data that is not labelled. Neural networks need to recognize patterns within datasets to learn for themselves which elements of a photo may be relevant.
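As a minimal illustration of learning without labels, here is k-means clustering, a simple algorithm (not a neural network, and used here only to show the idea) that groups unlabelled points purely by similarity. The 1-D points are invented for the example.

```python
def kmeans(points, k=2, iterations=10):
    """Group unlabelled points into k clusters by proximity alone."""
    centers = points[:k]  # naive initialisation: the first k points
    for _ in range(iterations):
        # Assign each point to its nearest current center.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centers[i]))
            clusters[nearest].append(p)
        # Move each center to the mean of its cluster (keep it if empty).
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

# Two obvious groups around 1 and 10; no labels are ever provided.
centers = kmeans([0.9, 1.0, 1.1, 9.9, 10.0, 10.1])
```

The algorithm discovers the two groups on its own, analogous to how an unsupervised network must find for itself which patterns in its data matter.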
Deep Learning: how neural networks have evolved in ten years
Other popular Machine Learning techniques include Adaptive Boosting, or AdaBoost, introduced by Yoav Freund and Robert Schapire in the 1990s. In 2001, Paul Viola and Michael Jones of Mitsubishi Electric Research Laboratories used it to detect faces in an image in real time. Rather than relying on a network of interconnected neurons, their detector filters an image through a cascade of simple decisions to identify faces.
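AdaBoost's core loop can be sketched on toy 1-D data: each round picks the simple threshold rule ("stump") with the lowest weighted error, then reweights the samples so the next round focuses on the mistakes. The data and stumps below are invented for illustration, and this sketch omits the image features and cascade structure of the Viola-Jones face detector.

```python
import math

def stump(threshold, sign):
    """A weak rule: predict +sign above the threshold, -sign below."""
    return lambda x: sign if x > threshold else -sign

def adaboost(xs, ys, rounds=5):
    n = len(xs)
    weights = [1.0 / n] * n
    ensemble = []  # list of (vote_weight, weak_rule)
    for _ in range(rounds):
        # Keep the candidate stump with the lowest weighted error.
        candidates = [stump(t, s) for t in xs for s in (1, -1)]
        best = min(candidates, key=lambda h: sum(
            w for w, x, y in zip(weights, xs, ys) if h(x) != y))
        err = sum(w for w, x, y in zip(weights, xs, ys) if best(x) != y)
        err = max(err, 1e-10)  # avoid division by zero on a perfect rule
        alpha = 0.5 * math.log((1 - err) / err)  # vote weight of this rule
        ensemble.append((alpha, best))
        # Raise the weight of misclassified samples, lower the rest.
        weights = [w * math.exp(-alpha * y * best(x))
                   for w, x, y in zip(weights, xs, ys)]
        total = sum(weights)
        weights = [w / total for w in weights]
    # Final classifier: a weighted vote of all the weak rules.
    return lambda x: 1 if sum(a * h(x) for a, h in ensemble) > 0 else -1

xs = [1, 2, 3, 8, 9, 10]
ys = [-1, -1, -1, 1, 1, 1]
classify = adaboost(xs, ys)
```

Each weak rule alone is crude, but the weighted vote of many of them is accurate; Viola and Jones applied this same principle to simple rectangular image features rather than raw numbers.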
For a time, this technique and others nearly eclipsed neural networks. Thanks to the explosion in the amount of labelled data, however, neural networks have returned to the forefront. In 2007, work began on ImageNet, a database of millions of labelled images drawn from the Internet. Thanks to services such as Amazon Mechanical Turk, offering users two cents for each tagged image, the database grew quickly. Today, ImageNet brings together more than 14 million labelled images.
Neural networks themselves have also evolved and now contain many more layers. The network behind Google Photos, for example, includes some 30 layers. Another major evolution is the convolutional neural network. These networks are inspired not only by the functioning of the human brain in general, but by the visual cortex in particular.
Within such a network, each layer applies a filter to the image to identify specific patterns or elements. The first layers detect broad attributes, while the later layers identify subtler details and organize them into concrete features. These convolutional networks can thus identify highly specific attributes, such as the shape of the pupils or the distance between the nose and the eyes, and recognize a cat with remarkable precision.
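The filtering step described above is, at its core, a convolution: a small kernel slides over the image and measures how strongly each patch matches the pattern the kernel encodes. Below is a minimal sketch with an invented 6x6 "image" containing a vertical edge, and a hand-written filter that responds to such edges.

```python
def convolve(image, kernel):
    """Slide the kernel over the image; each output value is the dot
    product of the kernel with the image patch beneath it."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            row.append(sum(kernel[a][b] * image[i + a][j + b]
                           for a in range(kh) for b in range(kw)))
        out.append(row)
    return out

# A dark-to-bright vertical edge running down the middle of the image.
image = [[0, 0, 0, 1, 1, 1] for _ in range(6)]

# A Sobel-like filter: strong response where brightness rises left to right.
kernel = [[-1, 0, 1],
          [-1, 0, 1],
          [-1, 0, 1]]

feature_map = convolve(image, kernel)
```

The resulting feature map is strongest exactly where the edge lies and zero elsewhere; in a real convolutional network, the kernels are not hand-written but learned during training, layer after layer.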
Deep Learning: what is it used for?
Deep Learning has many uses. It is the technology behind Facebook’s facial recognition, for example, which automatically identifies your friends in photos. It is also what allows Face ID, the facial recognition feature of Apple’s iPhone X, to improve over time. As explained above, Deep Learning is also the central technology of image recognition.
To translate spoken conversations in real time, software such as Skype or Google Translate also relies on Deep Learning. It is likewise thanks to Deep Learning that Google DeepMind’s AlphaGo artificial intelligence managed to triumph over the world Go champion. In recent years, with the advent of convolutional neural networks, Deep Learning has been at the heart of computer vision and robotic vision.
Since the artificial neural networks of Deep Learning mimic the functioning of the human brain, the possibilities offered by this technology should grow as we uncover the secrets of our own organ. By understanding how the brain processes information, and how evolution shaped its ability to make sense of images, reverse engineering could bring some of the brain’s potential to artificial networks.