The European Union is currently working on a new legal act regulating artificial intelligence. Experts predict that the adoption of this regulation will cause a revolution in business practice similar to the one triggered by the implementation of the GDPR. Is this actually going to happen? Will every entrepreneur be required to adapt their business to the Artificial Intelligence Act? What obligations does the AIA impose on enterprises? If you are interested in the answers to these questions, we invite you to read the following article.
The concept of Artificial Intelligence
Before analysing the AIA, it should be explained how artificial intelligence is defined in the proposed regulation. In the published proposal, an “artificial intelligence system” is understood as software developed using one or more of the techniques and approaches listed below that can produce outputs such as content, predictions, recommendations, or decisions influencing the environments it interacts with.
The above-mentioned techniques and approaches are as follows:
- Machine learning mechanisms, including supervised learning, unsupervised machine learning, and reinforcement learning, using a wide variety of methods, including deep learning;
- Logic and knowledge-based methods, including knowledge representation, inductive (logic) programming, knowledge bases, inferential and deductive engines, (symbolic) reasoning, and expert systems;
- Statistical approaches, Bayesian estimation, search and optimization methods.
As can be seen from the above enumeration, the definition of an artificial intelligence system in the proposed regulation is extremely broad. Therefore, everyone using software based on the above-mentioned mechanisms should consider whether new legal obligations will also be imposed on them. Obviously, the obligations won’t be the same for all such systems. The provisions of the regulation distinguish several types of artificial intelligence.
The AIA distinguishes systems that pose:
- unacceptable risk
- high risk
- limited risk
- low or minimal risk
In this article, we will focus on high-risk systems, as their use imposes the greatest number of new obligations on the software owner.
High-risk Artificial Intelligence
The proposal indicates that high-risk artificial intelligence systems include those used for:
- identification and biometric categorization of natural persons, if they are intended for the remote biometric identification of natural persons “in real time” or “post factum”;
- education and vocational training, if they affect natural persons’ access to education and vocational training, as well as artificial intelligence systems used to assess students and participants in exams required for admission to educational institutions;
- employment, employee management, and access to self-employment, if they are intended for recruitment purposes, in particular advertising vacancies, screening or filtering job applications, evaluating or testing candidates, making decisions on promotion and termination of employment, assigning tasks, and monitoring and assessing the performance and behavior of employees.
Therefore, if entrepreneurs use any tool, program, or application operating on the basis of the above-described systems in their workplace, they will have to meet a number of new obligations and, as a result, adapt their company to the new legal requirements.
Responsibilities of entities using high-risk Artificial Intelligence
High-risk artificial intelligence systems must meet legal requirements regarding, among others:
- maintaining a risk management system;
- data and data management;
- technical documentation;
- recording of events during the operation of AI systems;
- transparency and provision of information to users;
- human supervision;
- reliability, accuracy, and cybersecurity.
The above requirements will be a challenge for all companies using AI systems that may be classified as high-risk under the new regulations. Severe penalties are planned for non-compliance with the upcoming regulation. A company may be fined up to EUR 30,000,000 or up to 6% of its total annual worldwide turnover from the previous financial year, whichever is higher. Each European Union Member State is going to appoint a supervisory authority to ensure AIA enforcement.
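To illustrate the scale of these penalties, the cap described above can be sketched as a simple calculation. This is only an illustrative sketch based on the draft regulation (the function name `max_fine_eur` and the example turnover figures are our own, and the draft provides that the higher of the two amounts applies):

```python
def max_fine_eur(annual_turnover_eur: float) -> float:
    """Maximum possible fine under the draft AIA: EUR 30,000,000
    or 6% of total worldwide annual turnover, whichever is higher."""
    return max(30_000_000, 0.06 * annual_turnover_eur)

# A company with EUR 100 million turnover: 6% is EUR 6 million,
# so the EUR 30 million fixed cap is the higher amount.
print(max_fine_eur(100_000_000))    # 30000000

# A company with EUR 1 billion turnover: 6% is EUR 60 million,
# which exceeds the fixed cap.
print(max_fine_eur(1_000_000_000))  # 60000000.0
```

As the example shows, for large companies the turnover-based limit quickly becomes the relevant one, which is why the classification of their AI systems matters so much.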
Should you make changes now?
The proposal was published in April 2021; however, the legislative procedure in the European Parliament only started in January 2022. So when should entrepreneurs expect the obligations described in this article to be enforced?
The new regulation will enter into force on the twentieth day following its publication in the Official Journal of the European Union. However, it will become applicable only 24 months after that date. Since the act has not yet been passed, this means it will not have a real impact on the market for more than two years.
Despite the distant date of application, the new regulation shouldn’t be ignored now. When the AIA takes effect, its transitional provisions in particular will have a great impact on the market. Under Article 83 of the AIA, the new regulation applies only to high-risk artificial intelligence systems that (1) are implemented after the date of application of the regulation (i.e. in more than two years) or (2) were implemented earlier but whose purpose or structure is subsequently significantly changed.
In the case of a high-risk artificial intelligence system, it may turn out that developing and implementing it within two years of the regulation’s entry into force will not impose any new obligations on the entrepreneur. This may prove an easier and cheaper solution than meeting the numerous new requirements. The moment at which the system is put into use is therefore a crucial factor in the application of the provisions of the new regulation.
However, the above-mentioned provision will not apply to systems posing unacceptable risk. Such a system will become illegal once the regulation enters into force, even if it was implemented before the adoption of the new legal act. As a result, the entrepreneur will be forced to withdraw such software, which may cause significant financial losses.
The above means that all entities planning, developing, or already implementing any software that may contain elements of artificial intelligence systems should consider now how their systems would be classified under the AIA rules. This classification will determine the future responsibilities of the software owner.
About the author
Law4Tech is an attempt to look at the problems of new technology law in a completely new way: a hub bringing together lawyers working in various fields such as cybersecurity, intellectual property law, European law, and new technology law. It is a team of young people who complement each other perfectly in knowledge, skills, temperament, and working style. Law4Tech combines a practical and scientific perspective with a pro-social attitude, and as such creates a space friendly to developing innovative ideas.