July 6, 2023

Bringing Law and Order to AI

The European Union again made headlines a couple of weeks ago when its MEPs approved a modified version of the EU AI (Artificial Intelligence) Act, originally tabled by the European Commission.  The EU hopes to agree on a final version of the act by the end of this year.  Notably, this would be the first comprehensive regulation of artificial intelligence by a major regulator.

Why the new focus on regulating AI?

Although the world’s interest in AI has been growing for many years, recent months have brought a fresh surge of discussion, stemming primarily from advances in the models behind popular services such as ChatGPT, Google Bard, and Bing Chat.  Microsoft has also fueled the discussion with its Copilot announcements.  As exciting and promising as AI appears to be, and as many benefits as it may offer us, we cannot forget to ask the question “Yes we can, but should we?”

The EU has led the way in answering this question.  In April 2021, it proposed the first regulatory framework for AI, which analyses and classifies AI systems according to the risk they pose to users.  The higher the risk level, the stricter the regulation.  The main goals appear to be making sure that AI systems are safe, transparent, non-discriminatory, and even environmentally friendly!

What about the rest of the world?

Tech regulation has a history of lagging behind the industry itself, and AI will be no exception.  Other jurisdictions, like the US, are far behind the EU in developing regulations.  Recently, several US states and municipalities have passed or introduced AI bills, and the federal government is holding hearings and forums to discuss possible AI regulation.  For now, the priority seems to be answering the question of what should be regulated and how.  The US did release a Blueprint for an AI Bill of Rights last October, which outlines five principles that should guide the design, use, and deployment of automated systems.  There are several other preliminary documents and task forces in the works as well.  The most recent effort is a bill introduced last week to create a commission focused on the regulation of AI.

The US commission, if created, would be tasked with considering how AI regulation might mitigate the risks and harms of AI, as well as how it might protect the US’s leadership in AI innovation and the opportunities that leadership may bring.  This would involve a balancing act: acknowledging the importance of addressing potential drawbacks while still harnessing the power of AI for social and economic benefits.  The commission would also consider how AI regulation should be overseen, and by whom.

In the meantime, the US is likely to see a host of state-by-state discrepancies in AI-related regulations.  Some are already in place, for example around the testing of autonomous vehicles (allowed in AZ, CA, and MI, for instance, but not in some other states).  New York City is putting a new regulation into effect this July governing the use of AI in hiring and promotion decisions.  Many of these preliminary regulations, along with the successes or problems the EU experiences in the coming months, will likely influence future national regulations in the US and other countries.  Some critics, for example, already consider the EU act too vague and hard to enforce.

What are the classifications in the EU AI Act?

The classifications proposed in the act are:

  • Unacceptable Risk
  • High Risk
  • Generative AI
  • Limited Risk
  • No Risk

What types of AI would have an unacceptable risk?

The EU suggests that the following types of AI be banned (source: https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence):

  • Cognitive behavioural manipulation of people or specific vulnerable groups: for example voice-activated toys that encourage dangerous behaviour in children
  • Social scoring: classifying people based on behaviour, socio-economic status or personal characteristics
  • Real-time and remote biometric identification systems, such as facial recognition

Some exceptions are allowed: with court approval, for example, biometric identification could be used to prosecute serious crimes.

High Risk

AI classified as high risk would be assessed before being put on the market and throughout the product’s lifecycle.  This includes AI used in products covered by EU product safety legislation, such as toys, aviation, cars, medical devices, and lifts, as well as systems falling into eight specific areas, including biometric identification, education, employment, essential private and public services, law enforcement, and legal interpretation.

Generative AI

Generative AI, like ChatGPT or Google Bard, would have to comply with transparency requirements.  These include:

  • Disclosing that the content was generated by AI
  • Designing the model to prevent it from generating illegal content
  • Publishing summaries of copyrighted data used for training

Limited Risk

AI in the limited risk category would still be subject to minimal transparency requirements, allowing users to make informed decisions and decide whether they want to continue using the application.  Users should always be made aware when they are interacting with AI.  This category includes systems that generate or manipulate image, audio, or video content.

No Risk

Finally, some AI will be considered no risk, such as the AI used in video games or spam filters.  Apparently, the vast majority of AI systems actually fall into this category.

What prompts some of these regulations, and are they really necessary?

One reason for requiring AI services to disclose the sources of the data used to train them is copyright infringement.  Other reasons include building in safety constraints to help counter “automation bias”.  Some feel that AI regulation is not really necessary because misuse of AI is already governed by other regulations.  For example, if a mortgage company uses an AI algorithm to evaluate loan applications, and this leads to racially discriminatory loan decisions, that violates the Fair Housing Act.  If AI software in an automobile causes an accident, this could fall under product liability law.

What is “Automation Bias”?

Automation bias is the tendency to let your guard down when machines are performing a task; for example, a pilot may become less vigilant while the aircraft is flying on autopilot.  This stems from the belief that machines are accurate, objective, unbiased, and infallible.  Research has also shown that when machines display signs of human behavior, like using conversational language, people tend to apply social rules of interaction, like politeness.  This increases people’s tendency to trust the machines.

Are AI companies trying to address AI concerns?

Several AI companies are trying to address some of the concerns with AI.  For example, Amazon is experimenting with a fairness metric called “conditional demographic disparity”.  These companies face various hurdles, however; for example, there is no agreed-upon definition of fairness.  A few AI company leaders are even getting involved in regulatory discussions around the world, providing information to lawmakers and, hopefully, bringing back feedback about public and industry concerns that will drive future efforts in this field.
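
To give a feel for what a metric like this measures, here is a minimal sketch in Python.  It is not Amazon’s implementation; the column names, outcome labels, and example data are all invented for illustration.  The idea: compare a group’s share of rejections with its share of acceptances (demographic disparity), then average that gap across strata of a confounding variable, such as income band, so comparisons are made within like groups.

from collections import defaultdict

def demographic_disparity(rows, group):
    # Share of `group` among rejections minus its share among acceptances.
    rejected = [r for r in rows if not r["accepted"]]
    accepted = [r for r in rows if r["accepted"]]
    if not rejected or not accepted:
        return 0.0  # disparity is undefined without both outcomes; treat as neutral
    p_rejected = sum(r["group"] == group for r in rejected) / len(rejected)
    p_accepted = sum(r["group"] == group for r in accepted) / len(accepted)
    return p_rejected - p_accepted

def conditional_demographic_disparity(rows, group):
    # Average the per-stratum disparity, weighted by stratum size.
    strata = defaultdict(list)
    for r in rows:
        strata[r["stratum"]].append(r)  # stratum = confounder, e.g. income band
    total = len(rows)
    return sum(len(members) / total * demographic_disparity(members, group)
               for members in strata.values())

# Invented example data: loan applications with a group label and an income-band stratum.
applications = [
    {"group": "A", "stratum": "low income",  "accepted": False},
    {"group": "A", "stratum": "low income",  "accepted": True},
    {"group": "B", "stratum": "low income",  "accepted": True},
    {"group": "A", "stratum": "high income", "accepted": False},
    {"group": "B", "stratum": "high income", "accepted": True},
    {"group": "B", "stratum": "high income", "accepted": False},
]

print(conditional_demographic_disparity(applications, "A"))  # positive means group A is over-rejected

A value near zero suggests the group is not disproportionately rejected once the stratum is taken into account; Amazon’s actual metric and tooling may differ in the details.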

What can MY company do with regard to adopting AI safely?

An article published in the Harvard Business Review suggests that, as a company leader, you should explore the following four factors when deciding whether to use AI in your business (a rough sketch of how a team might encode them follows the list):

  • The impact of the outcome – if lives or livelihoods are at stake, it is best either not to use AI or to make sure it remains subordinate to human judgement.
  • The nature and scope of decisions – if the task is mechanical or bounded, software may be at least as trustworthy as a human.  If the task is subjective or the variables change, human judgement should be trusted more.
  • Operational complexity and limits to scale – does the model show bias in some markets because its data is pooled from all markets, or does it adjust to each market?
  • Compliance and governance capabilities – do you have the personnel and systems in place to be able to prove compliance and adherence to regulations?
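
As a purely illustrative aid, the hypothetical sketch below encodes these four factors as a simple pre-adoption screen in Python.  The field names, decision rule, and messages are assumptions invented for this post, not an official checklist from the HBR article or any regulator.

from dataclasses import dataclass

@dataclass
class AIUseCase:
    affects_lives_or_livelihoods: bool  # 1. impact of the outcome
    task_is_bounded: bool               # 2. nature and scope of the decision
    adapts_to_each_market: bool         # 3. operational complexity and scale
    can_prove_compliance: bool          # 4. compliance and governance capability

def adoption_recommendation(case: AIUseCase) -> str:
    if case.affects_lives_or_livelihoods:
        return "Avoid AI here, or keep it strictly subordinate to human judgement."
    if not case.can_prove_compliance:
        return "Build compliance and governance capability before deploying."
    if not case.adapts_to_each_market:
        return "Audit for cross-market bias before deploying."
    if not case.task_is_bounded:
        return "Subjective or shifting task: require human review of AI output."
    return "Reasonable AI candidate: deploy with monitoring and periodic review."

# Example: a bounded back-office task with governance already in place.
print(adoption_recommendation(AIUseCase(
    affects_lives_or_livelihoods=False,
    task_is_bounded=True,
    adapts_to_each_market=True,
    can_prove_compliance=True,
)))

In practice, the point of the framework is judgement, not automation; a sketch like this is only a conversation starter for your review process.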

As a leader you need to make sure your company is adopting AI safely.  If you need advice or direction, reach out to Barnes Business Solutions, Inc. for assistance!
