Robots, voice assistants, instant translation, visual recognition, automated planning and learning, stock trading, autonomous cars… in these and many other fields, innovations related to Artificial Intelligence (AI) already exist behind the scenes of our daily lives. But its advances also raise the possibility of excesses, against which communities of scientists are mobilizing.
“Artificial intelligence is a scientific discipline that seeks methods for solving problems of high logical or algorithmic complexity. By extension, in everyday language, it refers to devices that imitate or replace humans in certain of their cognitive functions.” (From Wikipedia)
For digital science researchers, Artificial Intelligence is an umbrella term that encompasses eight major sub-domains:
- Knowledge (knowledge bases, semantic web, ontologies, etc.)
- Machine learning (supervised learning, neural networks, massive data analysis, etc.)
- Natural language processing
- Signal processing (speech, object recognition and localization, image recognition, etc.)
- Robotics including autonomous vehicles
- Neuroscience and Cognitive Science
- Algorithms (logical programming, causal reasoning, planning, etc.)
- Decision support
Already Very Concrete Applications
The autonomous car is undoubtedly one of the most visible demonstrations of how AI will change our daily lives. Just last year, BMW gave an example at CES 2017 with its concept car “i Inside Future”, designed to be on the road in 5 to 10 years.
Today, in retail, machine learning is regularly used to segment customers or analyze their behaviour. In several sectors, robots are being trialled for home delivery or personal assistance. As for voice recognition, it is already part of our everyday tools, easily accessible from a smartphone.
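To make the retail use case concrete, here is a minimal sketch of the kind of supervised learning involved: a nearest-centroid classifier that assigns a new customer to a segment. The data, features (monthly visits, average basket in euros), and segment names are all hypothetical, chosen purely for illustration.

```python
# Hypothetical supervised-learning sketch: segment customers by behaviour.
# Each example is ([monthly_visits, avg_basket_eur], segment_label).
import math

training_data = [
    ([2, 15.0], "occasional"),
    ([3, 18.0], "occasional"),
    ([12, 60.0], "regular"),
    ([15, 75.0], "regular"),
]

def centroids(data):
    """Average the feature vectors of each labelled class."""
    sums, counts = {}, {}
    for features, label in data:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {lbl: [x / counts[lbl] for x in acc] for lbl, acc in sums.items()}

def classify(features, cents):
    """Assign the label of the nearest centroid (Euclidean distance)."""
    return min(cents, key=lambda lbl: math.dist(features, cents[lbl]))

cents = centroids(training_data)
print(classify([14, 70.0], cents))  # a frequent, high-spend customer -> "regular"
```

Real retail systems use far richer features and models, but the principle is the same: learn patterns from labelled past behaviour, then apply them to new customers.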
Is AI a Threat?
In 1997, some 40 years after the first intuitions of the mathematician Alan Turing, a machine managed to beat the reigning world chess champion Garry Kasparov. In 2016, DeepMind’s AlphaGo program demonstrated its superiority and self-learning ability through repeated victories against the world’s best Go players.
The potential of these technological breakthroughs relative to human thinking obviously raises major issues, and gives rise to many debates and positions:
- Is augmenting human physical and intellectual capacities with new technologies progress, or a danger for humanity?
- How can high-frequency trading, and the risks of financial crisis it carries, be kept under control?
- How will personal data be respected?
- What will be the socio-economic impact of robots?
Far from being futuristic debates, the impact of Artificial Intelligence already justifies these questions. Recent advances are such that we will soon see robots capable of doing almost everything humans do. Tens of millions of jobs could disappear over the next 30 years. But job losses are certainly not the only, nor the most dangerous, consequence of Artificial Intelligence. Leading scientists are alerting public opinion and demanding a moratorium to prevent abuses in this area, which could lead humanity to its downfall.
For the Ethics of Artificial Intelligence
These questions have led scientists to demand the creation of an ethical framework for the development of Artificial Intelligence, as well as security guarantees for the years to come. Some believe Artificial Intelligence is potentially more dangerous than nuclear weapons!
For Wendell Wallach, an expert on ethics at Yale University, these dangers demand a global response. He has also called for a presidential decree declaring lethal autonomous weapon systems in violation of international humanitarian law:
“The basic idea is that there is a need for concerted action to keep technology as a good servant and not to let it become a dangerous master…”