Providing AI with a solid ethical foundation that can not only protect but also promote human rights and human dignity is the goal of the UNESCO global standard.
The global standard on the ethics of artificial intelligence (AI), presented by the United Nations Educational, Scientific and Cultural Organization (UNESCO), is to be adopted by its 193 member states. The standard defines common values and principles that will guide the construction of the legal infrastructure needed to ensure the healthy development of AI. In a statement, UNESCO said the use of AI brings several benefits but also challenges, including gender and ethnic biases, significant threats to privacy, dignity and autonomy, the dangers of mass surveillance, and the increasing use of unreliable AI technologies in law enforcement.
The standard recommends:
– Protect data through actions that go beyond what technology companies and governments are currently doing, giving individuals greater protection by ensuring transparency, agency and control over their personal data. The standard states that individuals should be able to access, and even delete, records of their personal data. It also provides for measures to improve data protection and individuals' knowledge of their own data and their right to control it, and it strengthens the ability of regulators around the world to enforce these provisions.
– Prohibit social scoring and mass surveillance. The standard explicitly prohibits the use of AI systems for social scoring and mass surveillance. UNESCO considers these widely used technologies to be highly invasive and to infringe on human rights and fundamental freedoms. The recommendation stresses that, when developing regulatory frameworks, Member States should bear in mind that responsibility and accountability always rest ultimately with humans, and that AI technologies should not themselves be endowed with legal personality.
– Assist in monitoring and evaluation. Ethical impact assessment should allow countries and companies that develop and deploy AI systems to assess the impact of those systems on individuals, society and the environment. A readiness assessment methodology should enable Member States to determine how prepared they are in terms of legal and technical infrastructure. This tool will help strengthen the institutional capacity of countries and recommend appropriate measures to ensure that ethics is implemented in practice. Member States are also encouraged to consider establishing an independent AI ethics officer or another mechanism to oversee ongoing auditing and monitoring efforts.
– Protect the environment. The standard recommends that governments assess the direct and indirect environmental impact of an AI system throughout its life cycle, including its carbon footprint, its energy consumption, and the environmental impact of extracting the raw materials needed to manufacture AI technologies. It also aims to reduce the environmental impact of AI systems and data infrastructures.