Europe’s Proposed AI Legislation

The Artificial Intelligence (AI) Act

On 21 April 2021, the European Commission released a proposal for a regulation of the European Parliament and of the Council laying down harmonised rules on Artificial Intelligence (AI). The proposal sets out an Artificial Intelligence Act, a compliance framework, and the establishment of a governing European Artificial Intelligence Board (EAIB).

This article is a summary of the proposal, an exploration of its approach, and a commentary on some of the apparent strengths and shortcomings of the Act, including an exploration of the international political environment and the broader impact that the EU’s AI legislation (the ‘Act’) may have.


“AI should be a tool for people and be a force for good in society with the ultimate aim of increasing human well-being.”
— EU Proposal (page 1)


Definition of Artificial Intelligence (AI)

DEFINED BY THE ACT

Artificial Intelligence (AI) as defined within Annex I of the EU Proposal:

(a) Machine learning approaches, including supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning;

(b) Logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems;

(c) Statistical approaches, Bayesian estimation, search and optimization methods.


“Intelligence without ambition is a bird without wings.”
— Salvador Dali (1904 – 1989)


THE DEFINITION PROBLEM

Definitions of Artificial Intelligence (AI) always seem to have their shortcomings, attempting to bridge public perception, computer science, biological science and even science fiction, in a linguistic space where perspective and bias are already strongly at play.

At its simplest, ‘artificial’ just means man-made or machine-made, that is, not organically or naturally created. ‘Intelligence’ is the more problematic word. At its simplest, it means the ability to acquire and apply knowledge (data) and skills (process). That, of course, would make any device with a computational processor and data storage ‘intelligent’.

In the two-word combination ‘Artificial Intelligence’, ‘Intelligence’ becomes a largely useless word, as the combined definition would include every piece of active software applying a process to transform data into a resulting outcome. A definition that broad serves neither the practical delineation of a field of endeavour nor meaningful discussion and debate.
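To make that breadth concrete, consider a minimal sketch (illustrative only, and not drawn from the proposal itself) of a few lines of entirely ordinary statistics. Under a literal reading of Annex I(c), with its ‘statistical approaches’ and ‘estimation’, even code this trivial arguably qualifies as an ‘artificial intelligence’ technique:

    # A least-squares trend estimate: routine statistics that has existed
    # for two centuries, yet arguably an "AI technique" under Annex I(c)
    # ("statistical approaches ... estimation").

    def fit_trend(xs: list[float], ys: list[float]) -> tuple[float, float]:
        """Ordinary least-squares fit of y = slope * x + intercept."""
        n = len(xs)
        mean_x = sum(xs) / n
        mean_y = sum(ys) / n
        slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
                 / sum((x - mean_x) ** 2 for x in xs))
        intercept = mean_y - slope * mean_x
        return slope, intercept

    # "Acquire knowledge (data)" and "apply a skill (process)":
    slope, intercept = fit_trend([1, 2, 3, 4], [2.1, 3.9, 6.2, 8.1])
    print(f"predicted y at x=5: {slope * 5 + intercept:.2f}")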

Most conversations about ‘Artificial Intelligence’ instead change the working meaning of ‘intelligent’ to mean something closer to ‘human-like’ in conception. This has the result of confusing ‘intelligence’ with concepts such as consciousness, morality, personality, self-awareness, and areas of neuroscience and of life itself that remain undiscovered or poorly understood.

No wonder ‘Artificial Intelligence’ is such a trigger for divergent views and debate. In an attempt at further clarity, other terms have appeared: Artificial General Intelligence (AGI) for machine intelligence that approximates that of humans, and Artificial Superintelligence (ASI) for areas of intelligence beyond the capabilities of human beings. The problem often encountered is that there is no singular type of intelligence, and no suitable direct comparison unless it is outcome-, purpose- or result-based.

WHAT’S WRONG WITH THE ACT’S DEFINITION

The European Commission’s proposal contains the definition reproduced above. It is an entirely ‘process-based’ definition, describing which forms of data handling are to be considered ‘artificial intelligence’. This presents two immediate gaps:

Firstly, most of the remainder of the proposed AI legislation, especially the eight high-risk AI categories of Title III, which form the bulk of the legislation’s focus, describes ‘fields of endeavour’. These are areas of application, mapped to contemporary industries and sectors, rather than processes. This creates an immediate problem: the definition is process-based, while the application of the legislation is outcome- or sector-based.

Secondly, and perhaps more importantly, a process-based definition of ‘artificial intelligence’ does not correspond to our general understanding of ‘human intelligence’. For the most part, we understand ‘human intelligence’ from observable outputs, what we might call a ‘results-based’ understanding of intelligence, rather than a process-based one.

There is still so much that we don’t know about the processes of human intelligence, or of other biological intelligence for that matter. In effect, the EU Proposal maps our relatively poor understanding of the processes of human intelligence onto the building of artificial intelligence, leaving observable outcomes and results out of the definition and, as a result, largely out of the legislation.

The Societal Risk of Artificial Intelligence

Why is the European Union looking to legislate AI specifically?

The reasons given for the proposed AI legislation are the “fast evolving family of technologies”, their “high impact”, and their potential to “bring about new risks or negative consequences for individuals or the society” (page 1 of the EU proposal).

Specifically, the major new and emergent risks relate to the opacity, complexity, bias, unpredictability, and autonomy that can be found, or potentially found, in AI technology (page 2 of the EU proposal).

The resulting aims are safety, legal compliance, investment certainty, trustworthiness, prevention of market fragmentation, and rights protection (paraphrased from page 3 of the EU proposal).

The proposal goes on to categorize AI ‘fields-of-endeavour’ by proportionate risk into three groups: Prohibited (banned) areas; High-risk; and the remaining low-to-medium-risk ‘catch-all’ group. These are summarized in the next section on ‘The Act’s AI Risk Categorization’.

Looking beyond the stated ‘reasons for the proposal’, why is the European Union looking to create specific legislation in respect of Artificial Intelligence (AI) as a discrete act?


“It is not worth an intelligent man’s time to be in the majority. By definition, there are already enough people to do that.”
— Godfrey Harold Hardy (1877 – 1947)


One obvious meta-reason is that accelerating change, specifically technological change, is often accompanied by equally profound social, cultural, economic, and political change. The bronze, iron, steam, industrial and information ages are just a few grand-scale examples.

It is also generally accepted that the rate of technological change is increasing, and the ‘collective technology set’ that falls under the broad swathe of artificial intelligence (AI) would appear to be one of the most likely candidates for significant and long-lasting disruptive technology. Artificial Intelligence will assuredly drive significant social, cultural, economic and political change.

We will look at specific political considerations, including speed of development along with sovereignty differences, in the section on ‘Other Jurisdictions’ later in this article. For now, let’s consider the risk-categorizations detailed in the Act, and the practical application issues that these present.

The Act’s AI Risk Categorization

The categorizations of Artificial Intelligence (AI) risk contained within the EU proposal:

Prohibited AI Practices (Title II)

The following four (paraphrased) Artificial Intelligence (AI) practices would essentially be banned under the Act, with some specific exemptions for law enforcement, military and anti-terrorism activity:

  1. Subliminal techniques (materially distorting a person’s behaviour).
  2. Exploiting vulnerabilities of specific groups (especially disadvantaged groups).
  3. Public authority analysis of trustworthiness based on social behaviour.
  4. Real-time biometric identification in public (some exemptions for law enforcement).

Since these are ‘prohibited’ and relatively discrete, the proposal does not dwell on them beyond their prohibition. The bulk of the proposal is devoted to the next category of perceived ‘high-risk’ AI endeavours.

High Risk AI Systems (Title III)

There are eight (8) fields of activity identified in the High-Risk Artificial Intelligence category (Title III). The ‘high-risk’ grouping is the most content-rich and obligation-heavy section of the proposed Act. In summary, the eight areas are:

  1. Biometric identification and categorization of natural persons.
  2. Management of critical infrastructure (safety components for traffic and utilities).
  3. Education (only in respect of determining access and assessment).
  4. Employment (only recruitment, selection, promotion and performance evaluation).
  5. Public and Essential Services (creditworthiness, access to welfare and first-response).
  6. Law Enforcement (only profiling, polygraphs, evidence review – all with exceptions).
  7. Migration and Border Control (as per law enforcement with certain exceptions).
  8. Justice and Democracy (interpreting/researching of facts and legal application).

The somewhat strange intersection of the Act’s ‘process-based’ definition of AI, discussed earlier, and this eight-field, industry-based ‘high-risk’ categorization is that some commonly considered ‘non-AI’ software processes will fall into the ‘high-risk’ group. For example, the existing statistical analysis and candidate-processing algorithms in job boards and social platforms such as LinkedIn would most likely fall under the compliance obligations of the Act, as sketched below.
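As a hedged illustration (the fields, weights and scoring formula here are entirely hypothetical, not taken from any real platform), a recruitment feature as simple as the following weighted ranking uses a ‘statistical approach’ under Annex I(c) and operates in the Title III employment field, so it would plausibly land in the Act’s ‘high-risk’ net:

    # Hypothetical job-board feature: rank candidates by a weighted score.
    # Plain arithmetic and sorting, yet a "statistical approach" applied to
    # "recruitment and selection", the employment field of Title III.

    from dataclasses import dataclass

    @dataclass
    class Candidate:
        name: str
        years_experience: float
        skill_match: float   # fraction of required skills matched, 0.0-1.0
        assessment: float    # normalized test score, 0.0-1.0

    def score(c: Candidate) -> float:
        """Hypothetical weighted score; the weights are illustrative only."""
        return (0.3 * min(c.years_experience / 10, 1.0)
                + 0.4 * c.skill_match
                + 0.3 * c.assessment)

    candidates = [Candidate("A", 8, 0.9, 0.7), Candidate("B", 3, 0.6, 0.95)]
    for c in sorted(candidates, key=score, reverse=True):
        print(f"{c.name}: {score(c):.2f}")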

Even more strangely, a vast array of what practitioners, commentators and even legislators may consider ‘higher-order’ artificial intelligence falls outside this ‘high-risk’ category, despite its almost obvious potential for misuse, disruption, and personal and societal risk.

Let’s look at just some of the areas that are missing from this ‘high-risk’ categorization and fall into the much vaguer and far less onerous ‘low-to-medium-risk’ category.

Missing from the High-Risk (Title III) AI Category

Here are just some of the areas that artificial intelligence (AI) is already influencing, that are NOT considered ‘high risk’ under the proposed EU Act:

Health, biotech (other than biometric), wearables, psycho-analytics, psycho-metrics, socio-analytics, socio-metrics, human-brain-interface (HBI), brain-machine-interface (BMI), transportation (other than transport safety), drones and other pilot-less transport, fin-tech, business technology, HR-tech (other than recruitment, workplace review and education), science (all aspects), arts (creation, rights, and duplication), intellectual pursuits and content creation and authorship, nano-technology, space exploration and exploitation, machine-to-machine interfaces and computing, internet-of-things (IoT), networks, social-media, entertainment, broadcasting, telecommunications, communication more broadly (including natural language processing – NLP), translation, encryption, new data-exchange models, data-lakes and data-warehousing, gaming (including immersive, augmented and virtual reality and simulations), sexual services and devices, inter-personal services and professional services, augmentation, animal sciences (including replication, augmentation and reproduction), environmental sciences, digital-twins and state-modelling (in almost any field), and perhaps most surprisingly of all: artificial-life, artificial-consciousness, robotics, weapons, spyware and other purposefully destructive technology (other than the Act’s short list of policing, border control, anti-terrorism and biometric assessment).


“Intelligence is what you use when you don’t know what to do: when neither innateness nor learning has prepared you for the particular situation.”
— Jean Piaget (1896 – 1980)


What is not included in the high-risk group is an extensive list of potentially transformational and disruptive artificial intelligence (AI), a list that of course also leaves out unknown, still-emerging (future) fields. It is not surprising that many commentators have said the proposal does not go far enough, misses the reality of AI’s impact, and is naïve in its conception and coverage.

Low-Medium Risk AI

This category is a catch-all for AI providers and users in all other areas, who will face lesser record-keeping and compliance requirements under the proposed Act.

It would theoretically include those areas described as ‘missing’ from the ‘high-risk’ group above, although many of them would not fall under the Artificial Intelligence (AI) definition given in the Act at all, even though most people would consider them part of the AI revolution.

Elon Musk’s Neuralink is an example: while the software might fall under the Act, the actual human-brain interface (HBI), the electrodes implanted into a human brain, would fall under neither the Act’s definition nor a high-risk category, and so would completely escape the legislation’s effect. The same is true of many components of AI, for example the hardware of robotics, and of the likely products and outputs of many artificial intelligence applications.

SUMMARIZING THE CATEGORIZATION PROBLEM

The criticism of the attempt to future-proof the Act with a ‘field-of-endeavour’ categorization is that it brings in some existing technology that is hardly artificial intelligence and, more concerning, leaves out so many of the places where artificial intelligence (AI) is going to have a truly transformational impact, places with exactly the societal risks that the Act purports to address.

In some cases, the EU will be looking to cover these with other legislation: medical, traditional and other standing Acts. If that is the case, then why have a specific ‘Artificial Intelligence Act’ at all? This may again point more towards political signaling than efficient governance, something we will explore in the ‘Other Jurisdictions’ and ‘Concluding Comments’ sections later in this article.

The Act’s Compliance Requirements

The proposed AI Act includes a large number of sections on how compliance with the Act will work, what the reporting and documentation requirements are likely to be, and how the governing body (the European Artificial Intelligence Board) will function. There are also some early estimates of the quantum of the likely compliance costs that will be faced by those impacted by the Act.


“A major lesson in risk management is that a ‘receding sea’ is not a lucky offer of an extra piece of free beach, but a warning sign of an upcoming tsunami.”
— Jos Berkemeijer


BRIEF SUMMARY OF COMPLIANCE REQUIREMENTS

The basic compliance requirement is submitting and maintaining a general description of the AI system, together with a structured response containing other substantive details, as follows:

This includes AI methodologies, design specifications, AI and algorithmic logic, components and integrations, data requirements, labeling procedures, data-cleaning methodologies, human oversight measures, validation and testing, discriminatory impacts, risk management, lifecycle changes, the declaration of conformity and the post-market monitoring plan, as well as other related details, both initially and on an ongoing basis.
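Since the proposal lists required subject areas but prescribes no submission format, the following is a purely hypothetical sketch of how a provider might structure that documentation internally; every field name here is an assumption for illustration, not a form set out by the Act:

    # Purely hypothetical internal checklist covering some of the subject
    # areas the proposal lists; the Act prescribes content, not structure.

    from dataclasses import dataclass, field

    @dataclass
    class TechnicalDocumentation:
        general_description: str
        ai_methodologies: list[str] = field(default_factory=list)
        design_specification: str = ""
        data_requirements: str = ""
        data_cleaning_methodology: str = ""
        human_oversight_measures: str = ""
        validation_and_testing: str = ""
        risk_management: str = ""
        post_market_monitoring_plan: str = ""

        def missing_sections(self) -> list[str]:
            """Names of sections still left empty, for internal review."""
            return [name for name, value in vars(self).items() if not value]

    doc = TechnicalDocumentation(
        general_description="Candidate-ranking feature of a job platform",
        ai_methodologies=["weighted scoring"],
    )
    print(doc.missing_sections())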

The exact form of this submission is not detailed at this stage beyond the required sections; however, the requirement will be substantive in the case of ‘high-risk’ category technologies. It is also unclear what will constitute non-compliance beyond the simple failure to complete the information submissions.

Compliance at this stage of the proposed legislation is primarily a reporting activity.

The proposal has substantial detail on what information is required, but little on what will happen to it: what non-compliance means, whether there will be merits-based review, what happens when high-risk activities ‘live up to the high-risk status’ by creating negative outcomes, and what audit, review, certification or other quality controls will take place.

So the Act broadly defines AI around the algorithmic processes involved, then categorizes fields-of-activity into risk groups, and then outlines in broad terms what ongoing information will need to be provided.

In a practical sense, the Act creates a ‘Venn-diagram’ where the ‘intersection’ is those covered by both the AI-definition, and the industry-based (activity) categorization. It is to those who fall into the ‘intersection’ that the compliance requirements of the Act are addressed.


“Getting up and drawing a Venn diagram is a great way to appear smart. It doesn’t matter if your Venn diagram is wildly inaccurate, in fact, the more inaccurate the better.”
— Sarah Cooper (1977 – )


The Act is vague on whether this ‘intersection’ applies to developers, deployers, or users, or to all participants in the operational chain. It appears, at least in this early form, to apply in all cases, or at least to those operating AI algorithmic processing. It could be read as giving the producers (developers) of systems an ‘out’ if they are not applying their systems to real (active) data sets.
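Reduced to code, the scope test is the intersection of two predicates. The sketch below is illustrative only; the set members and the unresolved role parameter are assumptions reflecting the ambiguity just discussed, not terms defined by the Act:

    # Illustrative only: the Act's scope as the intersection of two tests.
    # Which operational roles are caught is exactly the open question, so
    # `role` is accepted but, as drafted, appears not to narrow anything.

    ANNEX_I_TECHNIQUES = {"machine_learning", "logic_based", "statistical"}
    TITLE_III_FIELDS = {"biometrics", "critical_infrastructure", "education",
                        "employment", "essential_services", "law_enforcement",
                        "migration_border_control", "justice_democracy"}

    def in_scope_high_risk(technique: str, sector: str, role: str) -> bool:
        """True when a system falls in the definition/high-risk intersection."""
        meets_definition = technique in ANNEX_I_TECHNIQUES
        high_risk_sector = sector in TITLE_III_FIELDS
        # role ("developer", "deployer", "user") is unused here: the draft
        # appears to catch all participants in the operational chain.
        return meets_definition and high_risk_sector

    print(in_scope_high_risk("statistical", "employment", "deployer"))  # True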

It is also unclear what happens to the collected information. Most specifically, what if the ‘high-risk’ endeavour is engaged in exactly the type of activity, with exactly the malevolent outcomes, that the legislation is designed to protect Europe and European citizens from? There is no real coverage of what is ‘safe’ and benevolent ‘high-risk’ activity versus what is ‘unsafe’ and malevolent activity.

As it stands, the legislation seems incapable of achieving its objectives, as there is no consequence based on performance or merits, beyond the simple act of information provision and administrative compliance.

No doubt this will change; however, for now there is no real closure on how the process will work, or on what the powers and obligations of the European Artificial Intelligence Board (EAIB) will be. There is no clarity on how the legislation will complete the task of making AI “a force for good in society” and “increasing human well-being.”


“Concern for man and his fate must always form the chief interest of all technical endeavors. Never forget this in the midst of your diagrams and equations.”
— Albert Einstein (1879 – 1955)


Other Jurisdictions

Why is the European Commission proposing a specific Artificial Intelligence (AI) Act?

There is of course the stated reason: protecting European society and citizens in an area of rapid technological change. And there is the unstated reason: the impact of the global political environment.

Europe is not the only legal or sovereign jurisdiction positioning itself in respect of technological advancement and governance, and undertaking the corresponding political signaling.

Despite the proposed Act being short on detail on subjects such as the operational consequences of non-compliance (see earlier discussion on The Act’s Compliance Requirements), it is rich in detail on highly specific subjects like “real-time biometric identification in public.”

Why would an incomplete legislative framework be so detailed on one very specific area of Artificial Intelligence? The simple answer is China and the USA, as well as the direct relevance of some US-domiciled businesses that are impacting Europe with their AI technology, businesses such as Clearview AI.


“In today’s world everything is political. We are a statement – our clothes, haircut, the way we act.”
— Olga Tokarczuk (1962 – )


Public Surveillance of Citizens

Clearview AI is a US company that provides exactly the type of service described in the proposed EU legislation, “real-time biometric identification in public”. Clearview has amassed some three billion (yes, 3,000,000,000) facial photos in its database.

Social media and software businesses such as Facebook, Twitter and Microsoft have complained about Clearview’s practices, which include ‘scraping’ (collecting) images from social media and the internet at large. A number of US state jurisdictions have moved to ban the use of Clearview, and, most relevant to the proposed EU legislation, a large number of complaints have been filed with data protection regulators in France, Austria, Italy, Greece and the United Kingdom. These were no doubt a big part of the background legal and political environment as the legislative proposal was being drafted.

Clearview AI has put an end to most of its private-company software licensing; however, it provides its services to an ever-increasing list of law-enforcement and state-based agencies in the USA, Europe, Australia and other jurisdictions.

In Australia, police agencies denied they were using Clearview AI until a stolen and disseminated customer list revealed the Australian Federal Police, as well as the Queensland, Victorian and South Australian state police forces, as users. There is no legislation in Australia that protects its citizens from any form of state-based public biometric identification. It is simply not spoken about by political and state agencies, and, for the most part, the media is not interested in chasing down the lack of transparency.

In China, there is no attempt to hide state-based citizen surveillance. The ‘shèhuì xìnyòng tǐxì’ (社会信用体系), or what has been called in the US the Chinese Social Credit System, is in effect a national blacklist being developed to assess the economic and social (political) reputation of China’s citizens and businesses, one that will no doubt expand beyond China’s borders to include external businesses and selected international citizens. This is coupled with the largest array of CCTV cameras in the world, estimated at more than 200 million cameras surveilling the Chinese population.


“The masses never revolt of their own accord, and they never revolt merely because they are oppressed. Indeed, so long as they are not permitted to have standards of comparison, they never even become aware that they are oppressed.”
From 1984 — George Orwell
(Eric Arthur Blair, 1903 – 1950)


The proposed European AI Act has a lot to say, indirectly, on what is happening in China, in the USA, in Australia and in other jurisdictions. It has a lot to say, indirectly, on legislative and cultural frameworks that allow businesses like Clearview AI to operate on very shaky ethical grounds.

The proposed European AI Act, if it does nothing else comprehensively at this early stage, would make “real-time biometric identification in public” prohibited in Europe, with some specific anti-terrorism and major-crime investigation exceptions.

It would certainly make the Clearview AI software illegal for corporations and individuals operating in the European Union to use, and it may provide a comparison for citizens in other jurisdictions, as per the George Orwell quote above, to demonstrate how a more concerned administration may treat its individual citizens.

Misused AI certainly has the potential to make the dystopia of ‘Big Brother’ in George Orwell’s famous novel, 1984, an even more disturbing reality in an AI world, only a generation (40 or so years) later than his prescient novel foretold.

Mass bio-metric identification by the state is perhaps one of the first large-scale demonstrations of the risks of uncontrolled progression of Artificial Intelligence (AI) technology, and a reason why these early efforts of governance from the European Union should be supported, debated, understood and progressed within a human-centric frame of reference.

As the proposal states, “AI should be a tool for people and be a force for good in society with the ultimate aim of increasing human well-being.”

Concluding Comments

Many legal jurisdictions adapt existing fields of law to cover changing social and technical landscapes; it is often with great reluctance that entirely new fields of law emerge. A significant example is corporate law, which emerged as the industrial revolution changed the world in a new and dramatic way. Companies were given special status in law, in economics, in society and in political considerations.

The information revolution has a similar transformative dynamic, and artificial intelligence (AI) in particular could change elements of autonomy, agency, equity, control, ownership and accountability. It is realistic to consider circumstances where autonomous, self-learning artificial intelligence could come to be seen as having separate legal standing, creating new legal entities in a way that echoes the origin of corporations as separate legal entities.

In addition to the protection of citizens from runaway developments in technology such as public biometric identification, legislation such as the proposed EU Artificial Intelligence (AI) Act may be the beginning of an entirely new field of law.


“The safety of the people shall be the highest law”
— Marcus Tullius Cicero (106 BC – 43 BC)


Regardless of legal change, the impact of Artificial Intelligence (AI) is undoubtedly going to be profound. The launch of the European Commission’s proposal on 21 April 2021 is just the start of what we will see over the coming years and decades.

Within a couple of weeks of the proposal, the United States of America announced its National Artificial Intelligence Initiative, for overseeing and implementing the United States’ national AI strategy. A portal for the public-facing aspects of the US’s AI information is located at http://ai.gov/, following on from the US National AI Initiative Act of 2020, which became US law on 1 January 2021.

In Australia, in May 2021, the Australian Government announced a budget of $124 million (AUD) to be set aside to enhance AI capability, including the development of a new National Artificial Intelligence Centre.

China was one of the first movers, announcing in 2017 its New Generation Artificial Intelligence Development Plan, which outlined China’s plan to be the world leader in Artificial Intelligence by 2030, targeting a trillion yuan ($150 billion USD) per year as an economic objective.

Political, economic and social signaling is happening in every jurisdiction, and now legal frameworks are beginning to appear. The EU proposal is just the start of the law playing catch-up with a major societal transformation that could happen faster than any that have preceded it.


“Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks.”
— Stephen Hawking (1942 – 2018)


Now is the time to ensure that the primarily economic ‘race’ to discover, build and deploy the most powerful tool-set humans have ever built, artificially intelligent tools that could be capable of driving their own evolution and technical progression, is a road ahead that is well managed and that benefits humanity.

There may be many issues with the EU’s proposed AI legislation; however, the aim to be a “force for good” and to increase “human well-being” is to be applauded. That aim is missing from the plans of so many other nations, corporations and individuals who are writing our collective future.

David Warwick
AI Ethicist | Technology Consultant
30 May 2021


Responses to “Europe’s Proposed AI Legislation”

  1. MartinoTentino

    A great essay. Very clearly bringing focus to the Act and the minefield we are stepping into. Good quotes too. Unfortunately for the Act, this is being approached from the wrong direction – bottom up instead of top down. Endless lists and categories invite companies to find ways to avoid or escape. Top down is driven by philosophy and ethics. Why does this Act exist in the first place? What are we worried about? The Act, more importantly, needs to consider an ethical framework. Certainly, the military machine will develop AI to kill enemies (and people that look like enemies). What is it about the Chinese surveillance state that makes our blood run cold? Those top-down guidelines need to be canonised. Then the categories and exceptions (of which there will be many) can be considered against that baseline. Without a guiding light we will lose sight of what we need to guard against. Do no evil! LOL.



  2. […] In April 2021, the European Commission unveiled its proposed legal framework on AI. The proposed ‘Artificial Intelligence Act’ will ban certain AI practices and impose strict compliance on others. It is clear ethics signaling that Europe will walk a different line to China and the USA. I wrote a review of the proposed legislation here. […]

