Absolute AI corrupts absolutely.
The morality resident in technology decreases as its power grows.
The exponential pace of technological change represented by the basket of superior decision-making colloquially labelled Artificial Intelligence (AI) will deliver unprecedented power. This power will not be equitably or universally held; it will create power-centers and power-vacuums that will change the very fabric of our reality and our humanity.
As Jaan Tallinn said, “giving control over powerful AI to the highest bidder is unlikely to lead to the best world we can imagine.” The forces at play with information centralization are not dissimilar to those that lead to corporate or state monopolization.
Information monopolies will become far more powerful than any form of power monopoly we have seen before; in fact, they already are.
“Power tends to corrupt, and absolute power corrupts absolutely. Great men are almost always bad men…”
Lord Acton (1834 – 1902)
On April 5, 1887, Lord Acton sent a letter to Bishop Creighton that included the famous quote, “Power tends to corrupt, and absolute power corrupts absolutely. Great men are almost always bad men.” This observation, that a person’s sense of morality lessens as his or her power increases, is often borne out as an observable truth. In fact, ethics, when boiled down to a fundamental core, is the exercise of power in the common interest, ahead of the prioritization of individual interest.
How do you imbue AI with a suitable sense of common human interest? What is Ethical AI?
Algorithms are not structured to subjugate their power to complex external considerations of common interest. Technology is applied for a singular purpose.
It is easy to anticipate that, without external intervention, the ‘AI that wins’ will be centralized and most likely amoral. It will be vested with innate structural biases and prejudices; it will serve the purposes of its creators or itself. Artificial Intelligence will rally the forces of monopolization faster and more comprehensively than ever before, without any human misgivings or moral considerations.
We already see information monopolies forming. Where once economists spoke of ‘Geographic Monopolies’, we can now see ‘Data Domain Monopolies’ emerging. The forces are the same, just operating at unprecedented speed and scale, characterized by ownership of a key resource (information), barriers to entry, and, in many cases, state-sanctioned and legal market protections.
Even more than Lord Acton’s observation on corruption, the absolute power imbued by superior AI will almost certainly be corrupted absolutely, unless you control it, own it, and deploy it. If you do, then you hold dominion, the power to be corrupted absolutely.
There are no stone-age men, there are no bronze or iron-age settlements, there are almost no early industrial communities, and there will be no early information-age societies in an era of ubiquitous and omniscient artificial intelligence.
The paradox of AI is that absolute power leads to absolute centralization.
There is only one winner: one model, one place where all data rests and the power of superior decision-making exceeds that of all other competitors, in an amoral algorithmic singularity.
The AI Arms Race has already started.
We can see the beginning of this with the technology leaders of our time. Competition to Amazon, Google, Apple, Facebook and their ilk is not increasing; their power is growing and centralizing. These technology leaders are already more powerful than most governments, at least insofar as the two remain separate from each other.
Governments are realizing that they are in a race as well.
In July 2017, China outlined its intent to be the world’s leading AI power by 2030. Its ‘New Generation Artificial Intelligence Development Plan’ outlines the scale of the market and China’s desire to become the ‘AI Superpower’ and the driving force in defining ethical norms and AI standards.
In April 2021, the European Commission unveiled its proposed legal framework on AI. The proposed ‘Artificial Intelligence Act’ will ban certain AI practices and impose strict compliance on others. It is clear ‘ethics signaling’ that Europe will walk a different line from China and the USA. I wrote a review of the proposed legislation here if you want more detail.
Shortly after the European announcement, the USA launched a new website – www.ai.gov – a home for their ‘National AI Initiative’, with the aim of “ensuring continued U.S. leadership in AI research and development, lead the world in the development and use of trustworthy AI in the public and private sectors, and prepare the present and future U.S. workforce for the integration of AI systems across all sectors of the economy and society.”
In Australia, the plan is to invest AU $124 million as a “global leader in developing and adopting trusted, secure and responsible AI”, hardly a drop in the AI-ocean compared to China’s US $150 billion ambitions.
“The control of information is something the elite always does, particularly in a despotic form of government. Information, knowledge, is power. If you can control information, you can control people.”
Tom Clancy (1947 – 2013)
So where is the debate, effort and conversation around AI ethics?
The pendulum of accountability is sadly retrospective. Pick your own analogy: tyrannical kings, abusive clergy, despotic rulers, genocidal regimes or corporate failures; the change for good came only after the practical, and usually painful, illustration of the bad. Take Enron. As John Collison said, “You’re used to seeing values listed on waiting-room walls. Communication, integrity, excellence, and respect. Those were actually Enron’s values.” It doesn’t matter what is said internally by the king, the leader, the priest, or the company; their focus is on their core ambitions. This focus drowns out external ethical considerations, and the level of this corruption, whether for good or ill, is a function of the level of power they have to see it through, at any cost, and against any objection.
Internally accountable ethics is no ethics at all.
In the case of Artificial Intelligence (AI), there is one fundamental difference. It is often said that ‘dehumanizing others’ allows for the rationalization of bad behaviour, but in every such case it is ‘dehumanization by a human’. In the case of AI, it is not.
There is no innate humanity to fall back on.
It isn’t simply the abstraction layers of a corporation, bureaucracy or government either. In the case of AI, at some future point, it will be dehumanization by a self-learning technology that has no human base for reflection, for moderation, for context or for mercy.
This is the case where retrospective accountability may actually come too late.
“You don’t want another Enron? Here’s your law: If a company, can’t explain, in one sentence, what it does… it’s illegal.”
Lewis Black (1948 – )
So why isn’t AI Ethics more prevalent and more powerful?
The answer is simple if you use any power analogy. The participants, whether you see this as a race or as a war, have vested interests and power they wish to protect and enlarge. They are concerned, in a time of rapid change, with the disruption or dissipation of their own power. This also means the participants know that they can’t sit still. Everyone in the game knows that the race is on.
That means the single-minded pursuit of each participant’s own objectives and a failure to consider broader societal consequences.
For the winner(s), the unprecedented level of information control, decision-making advantage, and power, whether political, economic, environmental or biological, that will come from being an ‘AI Superpower’ is a holy grail. One that embodies the phrase “absolute power corrupts absolutely”.
The AI Paradox is a singularity of power.
There can be only one, and there is no humanity in a singularity.
Humanity is by definition about community.
AI Ethicist | Technology Consultant
9 August 2021