Deciphering the AI Act
Last Updated on December 21, 2023 by Editorial Team
Author(s): Tea Mustać
Originally published on Towards AI.
Your AI Act action plan
On politics, negotiations and compromises
The AI Act [1] was formed in a long and painful process that perfectly demonstrated the influence and importance of politics in the European legislative process, but also the problems that a lack of expertise and perspective can cause. For instance, the most prominent and widely discussed generative AI models were not even mentioned in the first draft of the Act, which was published before ChatGPT went public and disrupted the reality we thought we lived in. The text concerning these models was then radically changed and renegotiated multiple times, often under the influence of newly published scientific papers or the reported progress of European generative AI startups, only to finally become one of the biggest make-it-or-break-it points in the final negotiations.
As a result of all this, we ended up with a confusing conundrum of somewhat related, occasionally logically connected, and hopefully still implementable provisions. Nonetheless, the Act has been politically agreed upon, and we will (hopefully) soon see the final version of the monstrum. (Although probably not in the coming month or two.) And while we're waiting on Frankenstein to scare us into compliance with its astronomical fines, there are certain steps we can take to set up our defenses.
The action plan
There is no place to start like the beginning. So let's first establish that if you are developing or deploying anything even somewhat automated, you most likely fall under the scope of the Act. Now that we have that out of the way, we can also establish that this is neither a threat nor the end of the world. It is, however, a reality that every single person or company developing or deploying an AI system (or wanting to do either) has to confront. Hopefully, the following action plan can help you do that.
1. Start planning today; complying with the AI Act's obligations will take time.
Familiarize yourself with the text of the Act. The final political compromise text is still not available, nor will it be for the next couple of months. However, certain obligations enter into force as early as 6 or 12 months after the Act is published, and the Act comes in at a whopping 150 pages, so looking the other way and waiting for things to get real is a very poor strategy.
The obligations included in the risk management system (Article 9) or quality management system (Article 17) provisions, for instance, include not only developing a concept for addressing the risks of the system but also implementing said management systems and designing processes for addressing novel risks as they arise. These are no trivial matters.
Other obligations, such as human oversight (Article 14), can realistically only be implemented by building safety features and guardrails into the products from the start. Trying to retroactively implement features that allow human observers to intervene in any part of the automated process will, in most cases, be, if not impossible, then at least associated with serious difficulties, delays in development and/or deployment, and additional financial costs.
Finally, ask yourself whether you have the resources and capacities to tackle the challenges of the Act. If not, make sure to find some.
2. Know your role
There are five roles in the AI Act: providers, deployers, importers, distributors, and manufacturers. In most cases, however, only two of these will be relevant: providers and deployers. Falling into one or the other category makes an enormous difference in the number and seriousness of the obligations one has to comply with.
Many of the obligations of the Act (a large share of them listed in Article 16) will primarily affect the providers of AI systems, meaning the companies developing AI systems and placing them on the market, regardless of whether the systems are developed or only deployed on the territory of the European Union. However, any person modifying the intended purpose of an AI system, as communicated by the provider, or making significant changes to the system, will also fall into this category and gain the privilege of trying to fulfill all of the Act's obligations (Article 28). And there are quite a few, so avoiding stepping into the provider's shoes is probably the most important hack for achieving compliance with the Act.
The second relevant role, that of a person or company deploying an AI system within the scope of their business activities, is associated with significantly less legal burden (Article 29). That burden mostly consists of due diligence obligations. Knowing the system you are deploying and its provider is a good place to start, followed by using the system according to the provider's documentation and for the purposes intended by the provider. Finally, depending on the system and the situation, the deployer might also have to comply with the obligations of implementing effective human oversight, enabling automated logging, or simply assisting the providers in complying with their own obligations. All of these are relevant but significantly less complex than designing the system that enables them in the first place.
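The provider/deployer distinction above can be sketched as a first-pass decision check. This is not a legal test from the Act itself, just a hypothetical illustration of the logic; the function name and parameters are my own.

```python
def determine_role(develops_system: bool,
                   modifies_purpose_or_system: bool,
                   deploys_in_business: bool) -> str:
    """Hypothetical first-pass role check: provider vs. deployer.

    Developing a system and placing it on the market makes you a provider
    (Article 16 obligations); substantially modifying a system or its
    intended purpose pulls you into the same role (Article 28).
    """
    if develops_system or modifies_purpose_or_system:
        return "provider"
    if deploys_in_business:
        return "deployer"
    return "neither main role (check importer/distributor/manufacturer)"
```

Note how modifying someone else's system lands you in the provider role just as surely as building one yourself, which is exactly why "avoid stepping into the provider's shoes" is the key hack.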
3. Know your system
As already mentioned, almost all automated systems will also be considered AI systems and fall within the scope of the Act. This needs to be taken seriously and should influence all stages of the development and deployment process. Consequently, you can start thinking about the system in light of the AI Act right from the beginning. This primarily means knowing whether your system falls into the high-risk category or not. To make this assessment, you can follow three basic steps.
a. Is the system Iβm using a product regulated by special safety regulations, or can it be integrated as an essential part of such a product?
b. Is my system listed in the (eternally long) list in Annex III?
c. If yes, are the risks associated with my specific product actually that high?
The last step of this basic assessment is by far the most complicated one and involves performing a risk assessment. However, this assessment is also the only potential gateway out of the extensive obligations, and even if the escape attempt remains unsuccessful, the assessment can still help you with the next step of this action plan.
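The three questions above form a simple decision flow, which can be sketched as follows. All names here are hypothetical illustrations, and the mapping of steps to outcomes is a simplification, not legal advice.

```python
from dataclasses import dataclass

@dataclass
class SystemProfile:
    """Hypothetical summary of an AI system for a first-pass risk check."""
    is_regulated_product_component: bool  # step (a): safety-regulated product or essential part of one
    in_annex_iii: bool                    # step (b): listed in Annex III
    assessed_risk_is_high: bool           # step (c): result of your own risk assessment

def first_pass_classification(p: SystemProfile) -> str:
    # Step (a): products under special safety regulations are high-risk.
    if p.is_regulated_product_component:
        return "high-risk"
    # Step (b): Annex III systems are presumed high-risk...
    if p.in_annex_iii:
        # Step (c): ...unless your documented assessment shows the risks
        # of your specific product are not actually that high.
        if p.assessed_risk_is_high:
            return "high-risk"
        return "potentially exempt (document the assessment)"
    return "not high-risk (other obligations may still apply)"
```

The point of the sketch is that step (c) only ever helps you once you are already caught by step (b); it never overrides step (a).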
4. Know your risks and address them as they arise
One of the most complex and extensive obligations of the Act is the implementation of the risk management system. To even begin designing this system, you first have to be very well aware of the risks posed by your system. This can, to a certain extent, be supported by the risk assessment from the previous step, but in order to design and develop a risk management system, any previously conducted assessment will have to be substantially upgraded.
To perform a successful and effective assessment, you should answer the following questions:
a. What is the intended purpose of the system, and is its use associated with any serious and obvious risks?
b. Which goals do I aim to achieve by implementing the system? Increasing efficiency? Automating certain parts of the process? Implementing additional checks or prechecks in my processes? Answering this can later help you assess and justify the implementation of the system.
c. Which categories of risk are associated with the intended use of the system? Systemic risks? Environmental risks? Privacy risks?
d. Who do the risks affect? The whole society? My employees? Users of my product/service?
e. What can I do to minimize each of these risks, and how often do I need to check the validity of the implemented measures to assure these are still adequate for addressing the risks?
It is very important in this context to address all potential risks and to analyze them both separately and in connection with each other, always keeping in mind your specific AI use purpose.
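Questions (a) through (e) map naturally onto the structure of a risk register. The sketch below shows one possible shape for such a register; the class and field names are hypothetical, not terms from the Act.

```python
from dataclasses import dataclass, field

@dataclass
class RiskEntry:
    """One row in a hypothetical risk register, mirroring questions (a)-(e)."""
    description: str            # (a) what can go wrong, given the intended purpose
    category: str               # (c) e.g. "privacy", "environmental", "systemic"
    affected_groups: list       # (d) e.g. ["employees", "end users", "society"]
    mitigations: list           # (e) measures currently in place
    review_interval_days: int   # (e) how often to re-check those measures

@dataclass
class RiskRegister:
    intended_purpose: str               # (a) the specific AI use purpose
    deployment_goals: list              # (b) e.g. ["efficiency", "prechecks"]
    entries: list = field(default_factory=list)

    def risks_affecting(self, group: str) -> list:
        """Risks must also be reviewed per affected group, not only one by one."""
        return [e for e in self.entries if group in e.affected_groups]
```

Grouping entries by affected group, as `risks_affecting` does, is one simple way to analyze risks "in connection with each other" rather than only in isolation.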
5. Document everything
The fines under the Act are going to be hefty (yes, heftier than under the GDPR), and compliance and accountability are the only preventive measures against them (Article 71). Everything you do should therefore be documented, even if this is not (yet) a legal obligation. By doing this, you can prove that you considered the risks and potential issues from the beginning and thought about how they could be addressed. Analyzing risks, designing and documenting control processes and reporting and responsibility chains, and publishing guidelines for employees or users can only help. These are just some of the best-practice steps that will strengthen your case should something happen, or should someone come knocking at the door to check your system once the Act is in force.
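"Document everything" can start as something as simple as an append-only decision log. The helper below is a minimal sketch of that idea, assuming a JSON-lines file; the function name and record fields are my own invention, not anything prescribed by the Act.

```python
import datetime
import json

def log_decision(path: str, actor: str, decision: str, rationale: str) -> None:
    """Append a timestamped compliance decision to a JSON-lines audit log.

    Append-only records make it easy to show, later, what was considered
    and why, and in what order decisions were taken.
    """
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "actor": actor,
        "decision": decision,
        "rationale": rationale,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```

A call like `log_decision("audit.jsonl", "compliance team", "classified system as not high-risk", "Annex III assessment, Q1 review")` leaves exactly the kind of dated trail that helps when someone comes knocking.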
6. Donβt forget the other stuff
Finally, yes, the AI Act is big and scary and coming. However, there are many other regulations applicable to AI systems, most of them addressing more specific issues, from the data used in the training process to the outputs of the systems and the data created on the back end. Some of these include the GDPR, the Digital Services Act, the Digital Markets Act, and the Data Act, and the number of such legal instruments is steadily increasing. The obligations contained in any of the mentioned (or unmentioned) acts are also here to stay and have to be addressed. However, as there is still no guidance from the EU regarding how these interact with the AI Act, it is unfortunately up to you to figure that out.
Final thoughts
After closely following the developments of the last couple of months, my personal conclusion regarding the AI Act is that it is very much like French grammar: for every rule, there are at least 7 exceptions. So much so that you start questioning why there was a rule in the first place. However, that is no excuse for simply giving up; rather, it should be motivation to confront the Act and its many complexities as soon as possible. Just as it is too late to start learning French once you land in Paris, it will be too late to start getting acquainted with the AI Act and its many intricacies once it is already in force.
[1] Last published changes to the original text from the 14th of June 2023 are available at: https://www.europarl.europa.eu/doceo/document/TA-9-2023-0236_EN.pdf.