What Are the New Threats in the Race for Artificial Intelligence?
The race for Artificial Intelligence is on. Like any technology, AI is not a risk in itself; it is its use that will determine the level and nature of the threats it poses.
AI opens up remarkable prospects in medicine, in transport, and even in environmental protection. Nevertheless, these developments also trace a new spectrum of threats, which we will attempt to outline here, focusing on three types: military-ethical, socio-political, and geopolitical.
Military opportunities, ethical issues: autonomous weapons
In the military field, the use of AI is considered mainly from two angles: that of collaborative combat (between humans and the “machine”) and that of autonomous weapons. It is the latter, also known as Lethal Autonomous Weapons (LAWs), that arouse the most serious fears. Several international organizations, including the International Committee of the Red Cross and UNESCO, have identified the ethical and legal issues raised by such weapons. In short, at the current stage of AI’s progress, LAWs would be unable, in their view, to respect three fundamental principles of international humanitarian law, the law of war: proportionality, discrimination, and responsibility.
In an emergency, it is indeed unlikely that an autonomous weapon would always be able to mount an attack proportionate to the threat it faces, given the many parameters to take into account, most of which are difficult to translate into algorithmic terms. It would likewise be tricky for an autonomous robot to distinguish a combatant from a non-combatant, or a terrorist from an armed civilian protecting their home. What decision would a “killer robot” make when confronted with a child pointing a toy weapon at soldiers? Such a choice requires cognitive capacities for reflection, abstraction, and even empathy that far exceed the current sophistication of AI algorithms.
Finally, in the event of an “error of judgment” by the robot, who should be held responsible? The operator, the designer, the manufacturer, or the commanders and policymakers who approved its use? Faced with so much uncertainty, these organizations call for a total ban on LAWs, for human beings to be kept in the decision-making loop, and for strict regulation of the use of AI, to prevent it from amplifying states’ domineering ambitions, directed at their external adversaries as much as at their own populations.
AI and the Big Brother state
Some states are likely to use AI for surveillance, regulation, and even social control: the revelations of Edward Snowden and WikiLeaks have amply demonstrated this, including in democratic regimes. But it is in China that this ambition is most obvious: the data produced by the country’s 800 million Internet users are processed by major platforms that put them at the service of the Chinese state for purposes of social regulation. Based on their behaviour, detected and analyzed by software for the automated processing of text, images, and sound, citizens are now assigned “scores” that determine how much freedom they enjoy. The rating system of Sesame Credit, one of China’s largest credit-scoring platforms and a subsidiary of Alibaba, evaluates each user in five areas: their credit history; their ability to honour their commitments; their personal characteristics; their preferences and behaviour; and their social relations. All of these elements are collected by Alibaba through the use of its subsidiaries’ services (Taobao, Alipay), which maintain particularly close ties with the Chinese state.
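To make the five-area structure concrete, here is a minimal sketch of how such a composite score might be aggregated. Only the five factor names come from the description above; the weights, input scales, and aggregation rule are purely hypothetical, since Sesame Credit’s actual model is proprietary and has never been published.

```python
from dataclasses import dataclass

@dataclass
class UserProfile:
    """One 0-100 input per Sesame Credit evaluation area (all hypothetical)."""
    credit_history: float      # past borrowing and repayment record
    commitments: float         # contracts honoured, bills paid on time
    personal_traits: float     # verified identity, education, employment
    behaviour: float           # preferences and observed platform activity
    social_relations: float    # conduct and standing of the user's contacts

# Illustrative weights only -- the real weighting is not public.
WEIGHTS = {
    "credit_history": 0.35,
    "commitments": 0.25,
    "personal_traits": 0.15,
    "behaviour": 0.15,
    "social_relations": 0.10,
}

def composite_score(user: UserProfile, lo: int = 350, hi: int = 950) -> int:
    """Map the weighted average of the five areas onto a 350-950 band,
    the range Sesame Credit scores are reported to span."""
    weighted = sum(getattr(user, name) * w for name, w in WEIGHTS.items())
    return round(lo + (hi - lo) * weighted / 100)

print(composite_score(UserProfile(80, 70, 60, 65, 50)))  # -> 766
```

Even this toy version shows why such systems alarm observers: a single opaque weight, such as the one attached to social relations, is enough to propagate a contact’s behaviour into one’s own score.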
As it appropriates the technological applications of AI, state power becomes elusive and yet omnipresent, dispersed through every interstice of social life via digital channels. In China in particular, society as a whole is placed under a disciplinary apparatus, in the manner of Jeremy Bentham’s panopticon as revisited by Michel Foucault. For regimes deeply inclined toward omniscience, AI can become an irresistible instrument for controlling public debate and neutralizing any spirit of deviance. It is these formidable potentialities of the technology that whet the appetite of states whose ambitions for power extend beyond national borders.
Towards a new world order?
The formidable promise of AI, especially in the economic, policing, and military domains, could eventually upset not only the socio-political structure of states but also the grammar of international relations, essentially for two reasons.
On the one hand, China seems well placed to become the field’s leader by 2025-2030. Released in July 2017, its “next-generation AI development plan” has an annual budget of $22 billion, expected to reach $59 billion by 2025; according to a PwC study, AI could increase China’s GDP by 26% by 2030. In addition, China has the financing, the technology firms (BATX), and the data needed to catch up with and overtake the United States in this area in the coming decade.
On the other hand, AI is unique in that its recent advances do not originate in the public sector; they emanate from the private sphere. States are therefore partially dependent on companies that have acquired a central position, especially in the military innovation process. From the Chinese “military-civilian fusion” program to the American Project Maven, states are aware of the fundamental role of technology firms in the development of AI and its application to economic and military ends. This interpenetration of commercial and military, private and public ambitions, on a global scale, gives these companies a disproportionate role: states place the sovereign choice of their strategic orientations in the hands of actors driven by their own particular interests.
The central position of technology firms in the race for AI also stems from the dual-use nature of this repertoire of techniques, which are easily transposed from civilian to military applications. This is evidenced by the semi-autonomous South Korean SGR-A1 sentry guns stationed along the border between the two Koreas, capable of detecting, targeting, and even firing without human intervention, which were developed by Samsung as early as 2006.
The result is a network of close interrelations between the commercial and the military, which increases the risk of a wide and uncontrolled spread of these AI technologies. We already know how terrorist groups such as Daesh have seized on civilian technological tools, such as social networks and encryption software, to spread their propaganda and coordinate certain attacks: terrorists could just as easily use autonomous vehicles packed with explosives to carry out their attacks. Advances in so-called “neural network” techniques, of which AlphaGo is the best-known example, could enable them to detect and exploit the vulnerabilities of certain computer systems, such as the cooling mechanism of a nuclear power plant.
Like any technology, AI is intrinsically amoral: it is its uses that determine the level of threat it poses. These challenges are, of course, the counterpart of potential benefits that we have deliberately chosen not to address here. The fact remains that, for all its potentialities, AI requires a demanding regulatory framework, one that will be difficult to implement, because AI arouses in every actor the Promethean fantasy of complete hegemony.