In recent years, machine learning has gained unparalleled attention. Giants such as Amazon, Facebook, IBM and Microsoft are investing billions of dollars in this area, through both acquisitions and research and development. As a result, the number of patent applications in the field of machine learning has increased tenfold over the last 10 years. Many new products and features have appeared: machine learning allows Amazon to automatically optimize its prices, Microsoft to offer live translation in Skype, and Google to rank web pages more precisely based on their content in order to answer user queries.
1950s – The concept
The concept of automated learning was developed in the 1950s by artificial intelligence researchers who believed that the best way for computers to behave intelligently was to allow them to learn. Until then, research had focused on using human experts to write rules that computers could follow. However, writing effective rules was extremely difficult.
During the late 1950s and throughout the 1960s, simple learning algorithms were developed and applied to problems that until then had been impossible to solve. Despite these promising beginnings, the 1970s and 1980s were marked by disappointment: progress was slow, research was poorly funded, and companies showed little interest in applying automated learning to real problems.
However, at the end of the 1980s, significant progress was made: a method for training arbitrarily complex neural networks, backpropagation, was discovered. In addition, the problem of behaving optimally when rewards are delayed was solved by reinforcement learning.
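The delayed-gratification problem that reinforcement learning tackles can be made concrete with a toy sketch. The corridor environment, state count and learning parameters below are invented purely for illustration (this is not any specific historical system): only the final state pays a reward, so the agent must learn that early steps matter for a payoff it receives much later.

```python
import random

# Q-learning on a 5-state corridor: states 0..4, reward only on reaching
# state 4, so the value of early moves must be learned from delayed reward.
N_STATES = 5
ACTIONS = [-1, +1]          # move left or right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(s, a):
    s2 = min(max(s + a, 0), N_STATES - 1)
    reward = 1.0 if s2 == N_STATES - 1 else 0.0
    return s2, reward, s2 == N_STATES - 1

random.seed(0)
for _ in range(500):                      # training episodes
    s, done = 0, False
    while not done:
        # epsilon-greedy: mostly exploit the current estimate, sometimes explore
        if random.random() < EPS:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: Q[(s, x)])
        s2, r, done = step(s, a)
        # Q-learning update: move Q(s, a) toward r + gamma * best future value
        best_next = max(Q[(s2, x)] for x in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s2

# The learned greedy policy moves right from every non-terminal state.
policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)]
print(policy)
```

The key point is the `GAMMA` discount: the reward earned at state 4 propagates backward through the updates, so even the very first move, which earns nothing immediately, acquires a positive value.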
1990s – The turning point
In the early 1990s, machine learning research continued with greater mathematical rigor. This allowed the development of new algorithms and kernel methods – such as Bayesian neural networks, support vector machines (SVMs) and Gaussian processes – which significantly improved real-world performance.
Machine learning was finally ready for commercial exploitation, and its early users applied it to problems such as fraud detection, credit scoring and customer churn prediction. With these real-world successes, its potential became clear and research funding increased. The field began to attract not only computer scientists but also researchers from fields such as mathematics and physics.
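To make this kind of application concrete, here is a toy sketch of a linear classifier of the SVM family, trained by sub-gradient descent on the hinge loss (a Pegasos-style approach). The two-dimensional "transaction" data and every parameter are synthetic, invented solely for illustration; a real fraud-detection system would use far richer features.

```python
import random

# Linear SVM-style classifier via hinge-loss sub-gradient descent.
# Synthetic 2-D data: class +1 ("fraud") clusters around (3, 3),
# class -1 ("legitimate") clusters around (0, 0).
random.seed(1)
data = [([random.gauss(3, 0.5), random.gauss(3, 0.5)], 1) for _ in range(50)] + \
       [([random.gauss(0, 0.5), random.gauss(0, 0.5)], -1) for _ in range(50)]

w = [0.0, 0.0]
b = 0.0
LAM, LR = 0.01, 0.1          # regularization strength and learning rate

for epoch in range(100):
    random.shuffle(data)
    for x, y in data:
        margin = y * (w[0] * x[0] + w[1] * x[1] + b)
        if margin < 1:       # point inside the margin: hinge-loss gradient step
            w = [w[i] + LR * (y * x[i] - LAM * w[i]) for i in range(2)]
            b += LR * y
        else:                # correctly classified with margin: only shrink w
            w = [w[i] - LR * LAM * w[i] for i in range(2)]

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else -1

# Classify one fraud-like and one legitimate-like point.
print(predict([3, 3]), predict([0, 0]))
```

The hinge loss is what makes this an SVM-style method rather than plain regression: points already classified with a comfortable margin contribute no gradient, so the decision boundary is shaped only by the difficult examples near it.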
During the 1990s, the storage space of a typical desktop computer grew a thousandfold, and the gigahertz race between AMD and Intel at the end of the decade brought a similar increase in processing power. By the year 2000, most desktops had enough memory to hold the data needed to learn effective solutions to complex problems, and enough processing power to complete that learning in a reasonable time. The technology had even reached the level necessary for machine learning to be used in video games: in Codemasters' Colin McRae Rally 2.0, to match the computer-controlled opponent to the player's level, or to let the player train an avatar in real time.
Mid-2000s – Advanced development
In the mid-2000s, GPGPU (general-purpose computing on graphics processing units) became practical. Cheap multicore processors were developed and the cloud emerged, resulting in tremendous growth in available processing power.
At the same time, the growth of the Internet drove the Big Data revolution: enormous data sets became accessible, enabling ever more complex machine learning algorithms to be applied to real problems. Deep learning – in which neural networks have many layers of interconnected neurons (deep neural networks, DNNs) – became possible, and quickly surpassed all previous approaches in fields such as speech and image recognition, at times even exceeding human performance.
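What distinguishes a multi-layer network is that stacked nonlinear layers can represent functions no single linear layer can – XOR being the textbook case. Below is a minimal pure-Python sketch of such a network trained by backpropagation; the layer sizes, learning rate and epoch count are arbitrary illustrative choices, and a real DNN would be far deeper and trained on vastly more data.

```python
import math
import random

# A two-layer network (tanh hidden layer, sigmoid output) learning XOR,
# a function that is not linearly separable. Pure-Python backpropagation.
random.seed(0)
H = 4                                          # hidden units (arbitrary choice)
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0
LR = 0.5

data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x):
    h = [math.tanh(w1[j][0] * x[0] + w1[j][1] * x[1] + b1[j]) for j in range(H)]
    y = sigmoid(sum(w2[j] * h[j] for j in range(H)) + b2)
    return h, y

def total_loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

loss_before = total_loss()
for _ in range(5000):
    for x, t in data:
        h, y = forward(x)
        # Backpropagation: output-layer error, then gradients one layer down.
        dy = 2 * (y - t) * y * (1 - y)
        for j in range(H):
            dh = dy * w2[j] * (1 - h[j] ** 2)
            w2[j] -= LR * dy * h[j]
            w1[j][0] -= LR * dh * x[0]
            w1[j][1] -= LR * dh * x[1]
            b1[j] -= LR * dh
        b2 -= LR * dy
loss_after = total_loss()
print(round(loss_before, 3), round(loss_after, 3))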
The last decade has seen the large-scale commercial exploitation of machine learning. It is now the benchmark technology in a wide range of industries because it can reliably extract extremely complex relationships from data, providing remarkable insight and accurate predictions. The theoretical advances of the 1990s made possible machine learning algorithms such as Bayesian neural networks and support vector machines (SVMs) that work well when data is scarce. Developments at the end of the 2000s made possible deep neural networks (DNNs), which offer unsurpassed performance with Big Data.
Today, machine learning algorithms are found on almost every platform, from processors embedded in a central heating system to huge cloud computing networks. They are used commercially for deciphering postal codes, processing checks, calculating credit risk and detecting fraud; for recognizing speech, translating texts and recommending movies; for matching players in online games, optimizing prices, sorting search results, filtering spam, animating virtual characters, optimizing data centers, playing against humans in video games, determining image content, driving autonomous vehicles and identifying pirated content.
And yet this is only the beginning of what companies will be able to do with machine learning. There is no doubt that this technology will transform business processes and create entirely new classes of products. Machine learning is no longer the preserve of researchers and academics; it has become a recognized tool in the corporate world.