The Artificial Intelligence Law: A Brief Explanation

Gamco Team

Since 2008, several countries have enacted legislation recognizing the importance of integrating artificial intelligence (AI) into key areas of national life. The most ambitious effort in this direction is the European Union's proposed Artificial Intelligence Act. In this blog post, we provide a brief explanation of the law.

AI has the capacity to greatly improve the quality of our lives, from assisting with everyday tasks to playing a key role in sectors such as finance, insurance and healthcare.

AI technology is revolutionizing the way we live and work. However, despite its many benefits, concerns have been raised about the consequences of developing AI capabilities. In particular, self-learning systems can end up taking actions that do not correspond to the purposes for which the software was developed.

In fact, there is a growing belief that the technology could escape human oversight and become a threat to people's safety. To address these concerns, the European Commission is making the world's first attempt to create a regulatory framework for AI applications with an Artificial Intelligence Act.

On April 21, 2021, the Commission adopted a pioneering regulatory framework that seeks to harmonize the matter. Its intention is ambitious: it aims to lay the foundations and provide legal certainty in order to stimulate investment and innovation, enacting a set of rules that safeguard fundamental rights and security.

This is not the only proposal the Commission has submitted: it had previously presented the Coordinated Plan on Artificial Intelligence, reviewed in 2021, in which:

  • The strategy that Europe will follow in this area was defined.
  • A coordinated plan was put in place to carry it out (thus laying the groundwork for the states to carry out joint measures).
  • The aim of this unification was to eliminate the fragmentation of funding programs, initiatives and actions undertaken by both the EU and the member states. 

These proposals arise from the need to build an artificial intelligence market in the Union that is more reliable, secure and respectful of fundamental rights.


Brief Explanation of the Artificial Intelligence Law

Approval of the regulation

As mentioned, this Regulation serves to establish a legal framework and thereby regulate the AI market in the European Union. Indeed, from its very first recital, the Regulation states that its objective is to improve the functioning of the internal market by defining a uniform legal framework.

This rule will apply throughout the EU, covering everyone from users and personnel in the EU to suppliers (including those manufacturing for the EU market).

The main objective of the Regulation is, on the one hand, to boost the development and use of artificial intelligence and, on the other, to strengthen the EU as a global center of excellence in AI, while always bearing in mind the dangers involved and ensuring that only reliable systems are deployed.

The Regulation will apply to software developed with machine learning approaches, logic- and knowledge-based approaches, or statistical approaches, excluding systems developed solely for scientific research and development, as well as those intended for military purposes.

Artificial intelligence risk classification

The Commission's proposal includes a ban on AI applications that pose unacceptable risks. The Artificial Intelligence Act also introduces specific obligations for systems that carry high risks in terms of health, safety and fundamental rights.

To this end, a multilevel system has been provided to classify the intrinsic risk associated with the artificial intelligence practices used:

  • Prohibited AI practices. The Regulation prohibits AI practices that generate unacceptable risk. One example is social scoring, which is considered to promote mass surveillance. Real-time biometric identification systems may only be used by entities collaborating with law enforcement authorities.
  • High-risk AI systems. The Regulation establishes a specific category for high-risk systems: technologies that present a significant risk of causing harm and whose use is therefore allowed only under specific safety controls. This part builds on current EU legislation on product safety and risk management and covers, for example, credit scoring, AI systems related to public infrastructure, and medical devices, among others.

Specifically, high-risk AI systems are scored using a scale that classifies the risk associated with the product itself.

The Regulation also includes a series of controls that must be carried out by the provider of a high-risk AI system. These include, among others: transparency, security, accountability, risk management, verification and oversight.
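As a purely illustrative sketch (not drawn from the legal text), the snippet below shows how a provider might internally record the risk tier of each AI system and track the controls listed above. The class names, tier labels and control list are assumptions made for the example, not terms defined by the Regulation.

```python
from enum import Enum
from dataclasses import dataclass, field

class RiskTier(Enum):
    """Risk tiers as described in this article (labels are illustrative, not legal terms)."""
    UNACCEPTABLE = "prohibited practice"
    HIGH = "high risk"
    OTHER = "limited or minimal risk"

# Controls the article lists for high-risk systems.
HIGH_RISK_CONTROLS = [
    "transparency",
    "security",
    "accountability",
    "risk management",
    "verification",
    "oversight",
]

@dataclass
class AISystemRecord:
    """Minimal compliance record a provider might keep for each AI system (hypothetical)."""
    name: str
    tier: RiskTier
    completed_controls: set = field(default_factory=set)

    def outstanding_controls(self) -> list:
        """Controls not yet evidenced; only relevant for the high-risk tier."""
        if self.tier is not RiskTier.HIGH:
            return []
        return [c for c in HIGH_RISK_CONTROLS if c not in self.completed_controls]

# Example: a credit-scoring model, cited in the article as a high-risk use case.
scoring_model = AISystemRecord("credit-scoring-model", RiskTier.HIGH, {"transparency"})
print(scoring_model.outstanding_controls())
```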

Penalty regime

The draft provides for penalties in case of non-compliance, reminiscent of those for violations of the General Data Protection Regulation. It also provides for the establishment of a European Committee on Artificial Intelligence, which will advise and assist the Commission, contribute to effective cooperation between national supervisory authorities and the Commission, coordinate and contribute to guidance and analysis of emerging concerns, and help national supervisory authorities and the Commission ensure the consistent application of the rules.

Member states are required to designate national competent authorities and a national supervisory authority tasked with providing guidance and advice for the implementation of the regulation. These will be supervised by the European Committee on Artificial Intelligence, which, in addition to the supervision and enforcement of the regulation, will be assigned the power to impose fines of up to EUR 30 million or 6 % of the company's turnover.
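How the two thresholds interact is determined by the legal text itself. As a rough illustration only, the sketch below computes both caps for a hypothetical turnover figure and, assuming the higher amount applies (as is common in GDPR-style penalty regimes), returns the larger of the two; the turnover value is invented for the example.

```python
def max_fine_eur(annual_turnover_eur: float,
                 fixed_cap_eur: float = 30_000_000,
                 turnover_rate: float = 0.06) -> float:
    """Upper bound of the fine mentioned in the article: EUR 30 million or 6% of turnover.
    Assumes the higher of the two applies, as in GDPR-style penalty regimes."""
    return max(fixed_cap_eur, turnover_rate * annual_turnover_eur)

# Hypothetical company with EUR 1 billion annual turnover:
print(max_fine_eur(1_000_000_000))  # 60,000,000.0 -> the 6% cap exceeds the fixed EUR 30M cap
```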

Entry into force and application

The regulation will apply from April 2023, two years after its entry into force. Although that date is still almost a year away, the text remains subject to change. For this reason, companies likely to be affected are advised to take it into account as soon as possible, given the threat of sanctions and the spread of AI, in order to meet the legal and technical challenges posed by this new legal framework.

Last December, 113 civil society organizations published a collective statement calling for fundamental rights to be brought to the forefront. In this declaration, they outlined recommendations to guide both the Parliament and the Council in amending the Commission's proposal.
