EU’s Road to AI Legislation


AI, DEMON OR NOT?

 

The risks and possibilities of AI technology are hotly debated. Strong opinions exist across the spectrum, with public figures saying everything from “with artificial intelligence we’re summoning the demon” to “Artificial Intelligence is the new electricity”. Most people agree that AI needs to be regulated in one way or another. The European Union agrees and has initiated an ambitious effort to propose harmonized rules on Artificial Intelligence. In 2021, the European Commission presented its first proposal for the “Artificial Intelligence Act”. Even though some steps remain before the Act is ratified by the European Parliament, the draft clearly shows the direction that AI regulation is heading. The EU AI Act is expected to act as a baseline for further international regulation of AI, in the same way GDPR has done for data privacy.

 

THE EUROPEAN COMMISSION’S AI ACT

 

The AI Act takes a risk-based approach to AI that “imposes regulatory burdens only when an AI system is likely to pose high risks to fundamental rights and safety”. To achieve this, the AI Act classifies AI systems into four risk levels: unacceptable risk, high risk, limited risk, and minimal risk.

 

Social scoring and “subliminal techniques beyond a person’s consciousness in order to materially distort a person’s behaviour in a manner that causes or is likely to cause that person or another person physical or psychological harm” are examples of unacceptable risks. Artificial intelligence practices falling under this category will be prohibited, except where strictly necessary for very limited use cases.

 

Examples of high-risk AI systems are applications for credit scoring, recruitment, determining access to education, making individual risk assessments for law enforcement purposes, and safety components for the management of road traffic and the supply of water, gas, heating and electricity.

 

High-risk AI systems will not be prohibited. On the contrary, all of these applications exist in different forms today and might be handled more consistently and efficiently by AI technology. However, there is considerable debate among stakeholders about what should count as high-risk AI, and the jury is still out. The dispute is understandable given the regulatory burden that will be placed on high-risk AI systems.

 

INCREASED TRANSPARENCY OF AI SYSTEMS

 

Much of the regulation relates to transparency of the system and the possibility of human oversight to validate it. Users must be able to correctly interpret the system’s output and use it appropriately. For example, a user must “be able to decide, in any particular situation, not to use the high-risk AI system or otherwise disregard, override or reverse the output of the high-risk AI system”. Also, for many use cases, the recipient of a decision based on the AI system’s output has the right to understand why the decision was made in a particular way.
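To make the override requirement concrete, here is a minimal sketch in Python of a human-in-the-loop checkpoint, where the AI output is only a suggestion until a person accepts, overrides, or disregards it. The names (`review_decision`, `ReviewAction`, and the credit-scoring example) are hypothetical illustrations, not part of the Act:

```python
from dataclasses import dataclass
from enum import Enum


class ReviewAction(Enum):
    ACCEPT = "accept"          # use the AI output as-is
    OVERRIDE = "override"      # replace the AI output with a human decision
    DISREGARD = "disregard"    # ignore the AI output entirely


@dataclass
class ReviewedDecision:
    ai_output: str
    action: ReviewAction
    final_output: str | None
    reviewer: str


def review_decision(ai_output: str, action: ReviewAction,
                    reviewer: str, human_output: str | None = None) -> ReviewedDecision:
    """Route an AI suggestion through a mandatory human checkpoint."""
    if action is ReviewAction.ACCEPT:
        final = ai_output
    elif action is ReviewAction.OVERRIDE:
        final = human_output
    else:  # DISREGARD: no automated output is used at all
        final = None
    return ReviewedDecision(ai_output, action, final, reviewer)


# Example: a credit-scoring suggestion that a case worker overrides.
decision = review_decision("reject", ReviewAction.OVERRIDE,
                           reviewer="case_worker_17", human_output="approve")
print(decision)
```

The point of the pattern is that the system records both the AI suggestion and the human action, so “disregard, override or reverse” is an explicit, auditable step rather than an informal workaround.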

 

These rules pose a problem for many AI technologies on the market today. Many AI technologies, for example neural networks and deep learning, operate as a “black box” for decision making. Typically, such systems produce results without any explanation, which makes detection and mitigation of faulty or inappropriate decisions inherently impossible.

 

It is impossible to validate an output, since the inner workings of black-box systems are hidden and cannot be reverse-engineered in a meaningful way. It is not possible to understand how inputs are combined to produce an output, or why.
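As a contrast, here is a minimal Python sketch of what an inspectable, “glass box” model looks like: for a linear model, the contribution of each input feature to a single output can be read off directly. The data and feature names below are invented for illustration; a deep neural network offers no equivalent direct read-out, which is exactly the problem described above:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for a high-risk task such as credit scoring.
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
feature_names = ["income", "debt_ratio", "age", "payment_history"]

# In a linear model, each feature's contribution to the decision
# can be read off directly from the coefficients.
model = LogisticRegression().fit(X, y)

applicant = X[0]
contributions = model.coef_[0] * applicant  # per-feature effect on the log-odds
for name, c in sorted(zip(feature_names, contributions),
                      key=lambda t: -abs(t[1])):
    print(f"{name:16s} {c:+.3f}")
print(f"intercept        {model.intercept_[0]:+.3f}")
```

This kind of per-feature breakdown is one of the simplest forms of the explanation that the transparency rules ask for; post-hoc explanation methods for black-box models exist, but they approximate the model rather than reveal its actual inner workings.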

 

Clearly, there is a strong need for strict regulation of high-risk application areas of AI. AI developers should work on AI systems that are transparent and verifiable, both to gain the trust of the users and to safeguard the rights of the individuals affected by them.

 

Clear explanations of decisions made by an AI system improve trust, in particular when the output can adversely affect the rights, opportunities, health, safety, social status, etc., of the individual being the target of the output. One can only wish that AI practitioners, to a much greater extent, will focus more efforts on decision auditing, determination of evidence using multiple data perspectives, understanding and using data source authority to produce results with high standards. This makes it possible to deploy many more helpful and usable real-world AI system outside the flood-wave of AI entertainment solutions such as chat-bots and generative art from text descriptions.
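As an illustration of what decision auditing could look like in practice, each decision can be logged together with its inputs, explanation, and data provenance. This is a sketch with hypothetical names and fields (`audit_record`, the model version, the credit example), not a prescribed format:

```python
import hashlib
import json
from datetime import datetime, timezone


def audit_record(model_version: str, inputs: dict, output: str,
                 explanation: dict, sources: list[str]) -> str:
    """Serialize one AI decision as an append-only audit log entry."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
        "explanation": explanation,   # e.g. per-feature contributions
        "data_sources": sources,      # provenance / data source authority
    }
    # Hash the payload so later tampering with the log is detectable.
    record["digest"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    return json.dumps(record)


print(audit_record("credit-model-1.4",
                   {"income": 42000, "debt_ratio": 0.31},
                   "approve",
                   {"income": +0.8, "debt_ratio": -0.3},
                   ["national_credit_registry"]))
```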
