The human race has set its mind on creating always-on, decision-making machines that we are calling Artificial Intelligence (AI). Since we don't truly understand what the non-artificial intelligence we are imbued with is, or how it works, what we are collectively creating is closer to 'Artificial Ignorance'. Let's explore why that is.
Before we get into how little we know about our own intelligence, how it works, and consequently how to replicate it as Artificial General Intelligence (AGI), the term used to describe human-like computer intelligence, let’s start with motivation rather than capability.
THE MOTIVATION TO CREATE ARTIFICIAL INTELLIGENCE
You have probably heard, in strategy work or elsewhere, of the interrogative questions, maybe as the 'five Ws and one H' or something similar: the classic exploratory questions Who, What, Why, Where, When and How. The field of Machine Learning (ML), which is becoming better known, if less accurately, as Artificial Intelligence, is mostly concerned with the HOW. How do we make machines, or more specifically computer software, indistinguishable from sentience and equally capable of what appears to be free will, independent action and decision-making?
There is dramatically less consideration of the WHY and WHAT. Why would we try to replicate humanity in machine form, and what does that really look like? Not to mention WHERE it should be applied, and WHO should benefit from, control and be impacted by AI. Of course, the WHEN just seems to be 'as soon as possible' in almost every case. Why is the WHEN now? Well, that's obvious: everyone involved wants to be first, and to bank any advantage ahead of the competition.
What else starts with HOW ahead of the five Ws? Basically anything that is process-driven rather than strategically or ethically driven. Or to put it another way, anything where, when you ask 'why is this being done?', the answer is usually some variation of 'because we can'. When applied to large fields of endeavour, this usually means unexpected and unknown consequences.
In the case of Artificial Intelligence, there is also a certain ‘god complex’ at play, especially when the purpose, impacts and controls are being largely ignored.
THE MISSING KNOWLEDGE
Do you know what intelligence is? Do you think that machine-learning code-writers do, or that the systems that evolve from their work will have a better idea of what intelligence is?
Ask ChatGPT 4 what intelligence is and it concludes with a summary along the lines of:
“Intelligence is a complex construct that encompasses various cognitive abilities and enables individuals to effectively understand and navigate the world around them.”
Sure, but how useful is it to define 'intelligence' with terms as ambiguous as 'cognitive ability', 'understand' and 'navigate'? On that basis, our artificial (computing) world is already intelligent. Google has me covered on understanding, my car's navigation has me covered, and computing power, sensor range and analysis capacity far surpass my cognitive ability. So why am I (arguably) intelligent while, so far, OpenAI's ChatGPT might not be considered AGI: intelligent in the way we mean but might not say?
What we mean is sentient, self-determined, independent and even, in some mystical sense, imbued with life. Some argue that humans don't meet these criteria, but most of us believe, in some way, that we do. We set this benchmark for 'artificial intelligence', a benchmark full of things we still do not understand about ourselves. We have no scientific understanding of, or agreement on, sentience or life.
It is not clear why single-celled slime moulds or fungal mycelia show decision-making, spatial awareness and short-term memory. So what 'intelligence' does each and every cell of a living organism hold, before we even look at centralised networks? This is just one example of the minefield of working out where understanding and intelligence actually reside.
So we can't define what 'intelligence' is, how it works, where it resides, or what, in any quantifiable way, it encompasses.
As the adage often attributed to W. Edwards Deming goes, "If you can't measure it, you can't manage it." Surely if you can't measure (or understand) it, you can't build it? That rather makes the current field of AI like amateur cooking: let's see if, by putting enough ingredients together, we can get an accidental end result. Can we create sentience, or our undefined concept of intelligence, by combining enough nested <if/then> statements and expansive data sets? The AI version of Frankenstein's Monster … somehow, magically, it will come to life.
Let's hope that it doesn't, because, just as in Mary Shelley's 1818 novel, we will have no idea what we have created. That's why I suggested that in some quarters a 'god complex' is at play.
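To make that point concrete, here is a toy sketch in Python (entirely illustrative: the rules and the tiny 'data set' are invented for this example) of the kind of nested if/then logic described above. However many such rules you stack on however much data, the mechanism remains lookup and pattern-matching, not understanding:

    # A toy 'decision maker': hand-written nested if/then rules over a
    # small lookup table standing in for an 'expansive data set'.
    KNOWN_COLOURS = {"sky": "blue", "grass": "green"}

    def toy_decider(question: str) -> str:
        q = question.lower()
        if "colour" in q or "color" in q:
            for thing, colour in KNOWN_COLOURS.items():
                if thing in q:
                    return colour      # looks like knowledge ...
            return "unknown"           # ... until the data runs out
        return "cannot answer"         # no matching rule, no response

    print(toy_decider("What colour is the sky?"))   # -> blue
    print(toy_decider("What colour is hope?"))      # -> unknown

Scale the table and the rules up as far as you like; nothing about the mechanism changes.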
SO WHAT SHOULD WE BE DOING?
At the very least, let’s move beyond HOW into some clear WHAT, WHY and WHO.
WHO should the progression of machine learning benefit? Given that the best AI depends on the best data, is it equitable to have humanity's total knowledge (or other large-group data) used to benefit singular governments, organizations or individuals at the expense of others? Shouldn't the beneficial result have at least the same equitable distribution as the data source? Is the winner the first past the post, whoever controls the most capable AI, or a higher view of societal, environmental and equitable distribution of benefits?
WHY is it being done? Will it improve the human condition for all, just some, or none – passing the world over to some non-biological future? What are the actual values, aims and intended outcomes of each advance? Before the question of HOW is solved, there should be a simple risk-benefit analysis of whether the exercise is a good one.
WHAT is the most interesting dynamic of all. When you add it to WHY, it leads to the consideration of a completely different why. Why would we try to replicate human intelligence, other than as an experiment – that old 'because we can' impulse, pushing to do it just to see if it can be done? Really, the WHAT should be something different from the replication of human intelligence: developments that are situationally superior and targeted at specific benefits.
After all, AGI, the point at which 'machine intelligence' equals 'human intelligence', is just a blink of an eye in the timeline of the universe: a split second of equilibrium that will be immediately passed as machine intelligence overtakes human intelligence. A singularity, if you will, borrowing from that other AI concept: the point at which AI will forever surpass human intelligence. The rate of technological evolution is far steeper and faster than any biological comparison.
So let's focus on the WHAT, WHY, WHO and WHEN, so that the accidental 'Frankenstein' of smarter AI (when it comes) is benevolent and well conceived – at least at its point of origin.
The fact that we are not doing this – that we are focused primarily on HOW to do it, thinking that somehow the rest can come later – is why we are headed for ARTIFICIAL IGNORANCE, humanity's ignorance at least. Perhaps the accidental algorithms will work it out after their god (or at least parent … us) has left their universe.
Rather than being some un-entertaining dystopian piece, this article is a call for focus on vision, deeper understanding, principles, equity, transparency and due consideration of what is being done and why.
This shouldn’t be a digital wild west with its accompanying land-grab. Though of course it is – so at least understand your own stake in the AI race.
David Warwick-Smith
Consultant
23 June 2023
