Artificial Intelligence: 12 Steps to Protect Humanity
Artificial Intelligence (AI) presents many opportunities for humanity, but it also brings its share of threats and dangers. The technology is advancing at a remarkable pace and continues to surprise with the breadth of possibilities it offers. Its potential seems virtually unlimited.
In fact, more and more companies across all industries are adopting it. Today, AI is used to assist decision-making, detect fraud attempts, help judge criminal cases, and even create works of art. It seems certain that Artificial Intelligence will be ubiquitous in the future.
AI can undoubtedly make the world better and improve our daily lives. However, this technology also carries real risks. To reduce these risks and maximize the benefits, a set of measures was presented in Brussels on October 23, 2018, as part of The Public Voice: AI, Ethics, and Fundamental Rights meeting.
Artificial Intelligence: 12 measures endorsed by more than 200 experts
These 12 measures aim to protect humanity from the various threats posed by AI and to “inform and improve its design and use”. More than 200 experts and 50 organizations contributed to the development of these recommendations, which take the form of rights, obligations, and prohibitions:
- The right to transparency: all individuals must have the right to know the basis of a decision made by an AI concerning them. They must be able to access the factors, logic and techniques that produced this decision.
- The right to human determination: all individuals must have the right to a final determination made by a person. An AI system must not make final decisions on its own.
- The obligation of identification: the institution responsible for an Artificial Intelligence system must be publicly known.
- The obligation of fairness: institutions must ensure that Artificial Intelligence systems do not reflect the unfair biases of their creators and do not make discriminatory decisions.
- The obligation of assessment and accountability: an AI system should be deployed only after an assessment of its purpose, objectives, risks, and benefits. Institutions must be held accountable for decisions made by an AI system.
- The obligation of accuracy, reliability, and validity: institutions must ensure the accuracy, reliability, and validity of decisions made by an AI system.
- The obligation of data quality: institutions must disclose the provenance of the data and ensure the quality of the data provided to their algorithms.
- The obligation of public safety: institutions must implement security controls to address the risks of deploying AI systems that control or direct physical devices.
- The obligation of cybersecurity: institutions must secure their AI systems against cybersecurity threats.
- The prohibition of secret profiling: institutions must in no case set up or maintain a secret profiling system.
- The prohibition of unitary scoring: no government should establish or maintain an AI-based general-purpose scoring system for its citizens or residents. This is a direct challenge to China's use of AI and Big Data to score its citizens and grant or deny them privileges.
- The obligation of termination: if human control of an AI system is no longer possible, the institution that created it must be required to terminate it.
Following the proposal of these measures in Brussels, the American privacy advocacy group, the Electronic Privacy Information Center (EPIC), has urged the United States government to adopt them nationwide.
A letter was sent to the National Science Foundation, which opened a call for proposals on Artificial Intelligence policy a few months ago. EPIC emphasizes that the 12 proposed principles are consistent with the seven strategies already deployed by the United States, which may facilitate their adoption. However, only the White House Office of Science and Technology Policy is really in a position to decide on the adoption of these measures.