Ethical AI Principles — AI the right way

Angelo Dalli
Nov 25, 2018

Artificial Intelligence can be a source of positive impact and good for society — if done right. As with all powerful technology, there can also be a lot of scope for abuse if ethics and sound principles are not kept in mind from the very beginning.

This brief article lays out my personal thoughts on the main principles and guidelines for AI researchers. It is intended as a brief overview of generally accepted principles that can lead to the right implementation of AI.

Main Principles

The main principles are that AI should have a positive impact on society, should not harm human beings, and should be implemented in an ethical manner.

Guidance Sources

The following sources are considered to offer reliable guidance:

  • The Asilomar principles, which are supported by over 3,000 AI leaders world-wide, including myself.
  • The House of Lords AI report to the UK government on the future of Artificial Intelligence and its social and economic implications.

Guidance Points

Drawing on various sources, most heavily the Asilomar principles (and roughly following their headings), and adding my own judgement from almost 20 years of experience in the field, I have summarised the most pertinent points for an ethical AI framework into the following guidance points:

Research: The goal of AI research should be to create not undirected intelligence, but beneficial intelligence that benefits society in general as much as possible, while minimising unintended consequences.

Investments: Investments in AI should be accompanied by a clear statement on how the investment will be used to achieve beneficial effects, covering the following aspects:

  • Robustness and Safety. How can we ensure that the AI systems do what we want without malfunctioning or getting hacked?
  • Prosperity for Society. How can AI systems help us grow economic prosperity while maintaining people’s quality of life and purpose to the best extent possible?
  • Risk Management. How can AI systems remain as unbiased and efficient as possible, and avoid infusing unfairness into human structures and systems? How can AI systems explain their actions so that humans can gauge and manage risk appropriately?
  • Societal Acceptability and Alignment. How can we ensure that AI systems are aligned with the values of modern societies that do not oppress their citizens, such as the values upheld by the EU?

Research Culture: A culture of co-operation, trust, and transparency should be fostered among researchers and developers of AI, especially when it comes to sharing lessons regarding safety.

Safety Standards: Accidents involving AI systems should be investigated in a similar manner to aircraft accidents, and the results published openly to ensure collective learning and accelerated safety improvements across manufacturers.

Ethics and Values:

  • Safety and Security: AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible.
  • Failure Transparency: If an AI system causes harm, it should be possible to ascertain why. AI systems should keep a trustworthy audit log and be able to explain their actions, in a similar manner to aircraft flight data recording (black box) systems (a minimal sketch follows this list).
  • Judicial Transparency: Any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority.
  • Explainability: Any AI system that has a direct economic or judicial impact on the daily lives of humans must have minimum explainable AI (XAI) features that are satisfactory to a competent human authority.
  • Manufacturer Responsibility: Designers and builders of advanced AI systems are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications.
  • Value Alignment: Highly autonomous AI systems should be designed so that their goals and behaviors can be assured to align with socially acceptable human values throughout their operation.
  • Human Values: AI systems should be designed and operated so as to be compatible with European and modern society ideals of human dignity, rights, freedoms, and cultural diversity.
  • Personal Privacy: People should have the right to access, manage and control the data they generate, given AI systems’ power to analyze and utilize that data, in line with the provisions of the EU General Data Protection Regulation (GDPR).
  • Liberty and Privacy: The application of AI to personal data must not unreasonably curtail people’s real or perceived liberty. European values must be upheld, and data sharing with non-EU countries that do not subscribe to these values should be prohibited.
  • Human Control: Humans should choose how and whether to delegate decisions to AI systems, to accomplish human-chosen objectives. A human-controlled off switch needs to be incorporated into all advanced AI systems to ensure human control.
  • Non-lethality: Research on lethal autonomous weapons should be banned. AI researchers should lead in ensuring the non-proliferation of lethal autonomous weapons. Dual-use research should not be unduly restricted, but research that is specifically and unambiguously for lethal use should be.
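
To make the audit log and off-switch points above more concrete, here is a minimal sketch in Python. It is an illustration under stated assumptions, not a definitive implementation: the decide() callable standing in for the AI model, the entry fields, and the class names are all hypothetical. The log is hash-chained so that tampering with past entries becomes detectable, in the spirit of a flight data recorder, and the wrapper refuses to act once a human operator has flipped the off switch.

```python
import hashlib
import json
import time


class AuditLog:
    """Append-only, hash-chained decision log: each entry stores the hash
    of the previous entry, so silently altering past records breaks the
    chain and becomes detectable on verification."""

    GENESIS = "0" * 64

    def __init__(self):
        self.entries = []
        self._last_hash = self.GENESIS

    def record(self, inputs, decision, explanation):
        # The explanation field carries the human-readable reason for the
        # action, supporting the failure-transparency principle.
        entry = {
            "timestamp": time.time(),
            "inputs": inputs,
            "decision": decision,
            "explanation": explanation,
            "prev_hash": self._last_hash,
        }
        self._last_hash = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        entry["hash"] = self._last_hash
        self.entries.append(entry)

    def verify(self):
        """Recompute the whole chain; returns False if any entry was altered."""
        prev = self.GENESIS
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            if body["prev_hash"] != prev:
                return False
            prev = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if prev != entry["hash"]:
                return False
        return True


class ControlledSystem:
    """Wraps a decision function (a stand-in for an AI model) behind a
    human-controlled off switch, logging every action it takes."""

    def __init__(self, decide, log):
        self._decide = decide  # hypothetical callable: inputs -> (decision, explanation)
        self._log = log
        self._enabled = True

    def shut_down(self):
        # The human-controlled off switch: once flipped, no further actions.
        self._enabled = False

    def act(self, inputs):
        if not self._enabled:
            raise RuntimeError("system halted by human operator")
        decision, explanation = self._decide(inputs)
        self._log.record(inputs, decision, explanation)
        return decision
```

For example, ControlledSystem(lambda x: ("approve", "score above threshold"), AuditLog()) would log every approval together with its stated reason, and calling shut_down() makes every subsequent act() call fail, keeping the final say with the human operator.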

Intelligence: Any discussion of AI should keep in mind that we currently only have narrow AI systems, and that more powerful general systems (Artificial General Intelligence — AGI) are still far over the horizon.

  • We should avoid strong assumptions regarding upper limits on future AI capabilities.
  • An intelligence test that assigns a simple-to-understand score (such as 1–10) to an AI system should be devised as the basis for future decisions and promoted worldwide for acceptance (a hypothetical sketch follows this list).
  • AI systems should gain more rights, but also more responsibilities, as they progress up the intelligence test scale.
  • AI systems designed to recursively self-improve or self-replicate in a manner that could lead to rapidly increasing quality or quantity must be subject to strict safety and control measures.
  • We should avoid defining consciousness and similar concepts, as there is no clear consensus and doing so would open up too many issues to handle appropriately at this stage.
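
As a purely hypothetical illustration of how such a score could be tied to rights and responsibilities, the sketch below maps score bands to tiers. The band boundaries, tier names, permissions, and obligations are my own illustrative assumptions, not an established standard.

```python
# Hypothetical tiers tying an intelligence-test score (1-10) to rights
# and responsibilities. All bands, names, permissions, and obligations
# below are illustrative assumptions, not an established standard.
TIERS = [
    (1, "tool", ["assist humans"], ["full audit logging"]),
    (4, "assistant", ["act with human approval"], ["explain every action"]),
    (7, "agent", ["act autonomously within scope"], ["carry liability cover"]),
    (9, "general", ["set its own sub-goals"], ["continuous human oversight"]),
]


def classify(score: float) -> dict:
    """Return the rights/responsibilities tier for a given test score."""
    if not 1 <= score <= 10:
        raise ValueError("score must be between 1 and 10")
    # Pick the highest band whose threshold the score meets or exceeds.
    threshold, tier, rights, duties = max(t for t in TIERS if score >= t[0])
    return {"tier": tier, "rights": rights, "responsibilities": duties}


print(classify(5.5))
# -> {'tier': 'assistant', 'rights': ['act with human approval'],
#     'responsibilities': ['explain every action']}
```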

No ethical guidelines can be turned into a concrete, implementable framework without being interpreted in the context of societal values. Whenever possible, globally accepted values should be the starting point. I have deliberately included references to European culture above, as I strongly believe that in the case of irreconcilable conflicts (for example, with cultures that use AI to oppress people), an AI with a modern value system, such as the balanced approach used in the Nordic countries, will lead to the best outcome for human society. It is important to ensure that AI is not used to oppress society but rather to help people achieve their true potential. A balanced approach that draws upon the best principles of human society as a whole is important to follow: no value system exists in a vacuum, and ethical principles are our best guide into the exciting unknown that lies ahead.

Read more about my views on AI and the future in an interview with MaltaToday — Brave, new (and slightly scary) world. Follow my Twitter handle AngeloDalli for regular updates.

The views above are entirely personal and are not associated with any official position. As a proponent of doing AI the right way, I will ensure that all the AI companies I am involved in follow the guidelines above, so that any resulting AI is built properly from day one.

Ethical Principles photo by Nick Youngson CC BY-SA 3.0 ImageCreator
