
EU AI Act: A Promising Step or a Risky Gamble for the Future of AI?

Last Updated on July 24, 2023 by Editorial Team

Author(s): Aditya Anil

Originally published on Towards AI.

The EU's AI Act seemingly has a big loophole, one that could let tech giants like OpenAI evade AI regulations en masse.

Image: Bing Image Creator + Canva

I. The First Leap Towards AI Regulation Is Here

The EU finished drafting its AI Act on June 14th this year.

And this is an initial leap towards safer and better AI. As you may know, the EU's AI Act is the first comprehensive international law regulating artificial intelligence. The Act lays down rules and regulations for companies (called providers) to build ethical AI systems. European countries, as well as major AI companies like OpenAI, have presented their own suggestions to the parliament.

The Artificial Intelligence Act (AIA for short) requires companies to follow certain requirements while working with AI systems. These include know-your-customer (KYC) checks on providers, disclosing the architecture of the system, details on data privacy, et cetera. The Act is the culmination of three years of intense policymaking, though the EU's joint efforts in this area date back to 2017. However, it was only recently that the need for AI regulation and ethics became evident.

People realised the power of AI wizardry through tools like ChatGPT and DALL·E, which worked terrifically fast and made everyone artists and poets overnight.

People marvelled at the capacity of AI tools, and later on, they became concerned about their safety. It became evident that these machines could generate convincing misinformation. The plagiarism and copyright issues raised by these wizards are an endless debate in themselves, particularly on the creative front.

While one side sold courses on AI prompts and wrote 'the top X AI tools you need to know before the world ends…' kind of posts, the other side worried about the breakneck pace of AI. These concerns were predominantly raised by AI experts, anxious about the safety of these wizards running on algorithms.

When will the EU pass the AI Act? (Metaculus forecast: https://www.metaculus.com/questions/8787/passing-of-the-ai-act/)

As I write this, the AIA is still making its way through the legislative process. It will likely be adopted in late 2023 or 2024, and on top of that there will be a grace period of potentially 24–36 months before the Act comes into force. So even assuming the AI Act is finalised by the end of this year, it won't be in full effect for another two to three years.

While the Act is still in the making, this is a great time to analyse it and see whether it can indeed tame these AI wizards, or whether there are some shady under-the-table loopholes in it.

II. Wizards and their Spell

Companies betting on AI are leading the future, and reaping the fortunes that come with it. The global generative AI market is already worth over $13 billion and is projected to surpass $22 billion by 2025. Regulating an industry of that magnitude is daunting, and in a project of this size there is almost always a loophole.

And such is the case with AIA.

The AIA is important because, in the coming years, other jurisdictions will likely build their own laws on top of this framework, making regulation more consistent globally. Companies like OpenAI and Anthropic, the big tech giants on the AI front, have both courted the idea of a universal regulatory body to make laws on AI.

While the AI Act was written with public safety in mind, it also includes provisions to support the companies it regulates, helping them flourish without skewing competition. This is because the AI Act requires companies to share information about their systems with other providers.

Accountability is a major issue with such algorithmic beasts. It created a gap: who will be accountable if these wizards go wild?

The AIA addressed this gap by clarifying the roles and responsibilities of AI providers.

With the magnitude of AI startups rising this year, the questions of computational resources and training data volume are crucial to address. The AI Act looks to put a cap on the training of new models, ensuring the safe development of these AI systems. Source: Sevilla et al. (2022)

The AIA entertains the interests of both the providers (AI companies) and the consumers (the users). Innovation in AI must go on while the associated risks are tackled. Part of making AI safe is developing it with safety in mind from the start; you need to train a beast in order to socialise it.

The AIA makes sure that regulation neither chokes the big corporations nor blocks the path of future technological advancement.

Of course, the draft Act can't satisfy every party. That was obviously the case for the big companies.

The big corporations allege that the EU is over-regulating.

Recently, more than 150 executives of big tech corporations signed an open letter to the EU alleging that the AIA over-regulates AI. The signatories included major tech leaders such as the CEOs of Renault and Siemens, the executive director of Heineken, and the chief AI scientist of Meta.

Moreover, the mighty OpenAI CEO Sam Altman also expressed his concern, stating he would have to leave the EU if the rules are too strict. Under the AIA, both ChatGPT and its large language model GPT-4 could be labelled 'high-risk'. I will show you what 'high-risk' means in a bit.

According to Time, the OpenAI CEO said: "If we can comply, we will, and if we can't, we'll cease operating… We will try. But there are technical limits to what's possible."

In another Time article, it was reported that OpenAI allegedly lobbied the EU to shape the draft in its favour, and it seems those efforts paid off in some sense. On the other side, Reuters reported that EU lawmakers were nonetheless pressing ahead with tougher AI rules.

Since the AI Act itself is comprehensive, I referred to the analysis by a non-profit organisation called the Future of Life Institute (FLI). If you remember, this is the same organisation that issued an open letter calling for a six-month pause on the development of AI, signed by 1,125 tech leaders and AI experts.

The other day, I found that FLI runs a weekly newsletter on the EU's AI Act on Substack, maintained by Risto Uuk, who posts highlights, major events, and FLI's analysis of the Act.

FLI has done a pretty good job of analysing the AIA, and it has its own research background in AI ethics: it developed the Asilomar AI Principles, a set of 23 guidelines for the safe development of artificial intelligence.

Hello readers! I hope you're enjoying this article. It is part of my Creative Block newsletter, a weekly newsletter on tech and AI.

If you'd like to read more content like this, head over to Creative Block.

While reading through one of their analyses, I saw that the AI Act still needs some key changes. It has loopholes, quite serious ones actually, that could let bad actors misuse these unregulated AI beasts.

So what are these loopholes?

III. High-Risk Systems and Evasion by Exception

The AIA is set up on a risk-based approach.

Under this approach, some AI uses deemed harmful are prohibited outright; some must follow strict rules; and some non-harmful AI uses are not caught by the AIA at all.

The second group, i.e. those systems subject to strict requirements, falls under the category of 'high-risk' systems.

How AI risk is categorised, according to the EU. Source: https://www.ey.com/en_ch/ai/eu-draft-regulation-on-artificial-intelligence

According to the EU, 'high-risk' systems are those that could negatively affect you and your fundamental rights. The EU classifies high-risk systems broadly into two categories:

1) AI systems used in products that fall under the EU's product safety legislation (toys, aviation, cars, medical devices, lifts, and so on), or

2) AI systems that fall into the following eight specific areas:

  • Biometric identification and categorisation of natural persons
  • Management and operation of critical infrastructure
  • Education and vocational training
  • Employment, worker management and access to self-employment
  • Access to and enjoyment of essential private services and public services and benefits
  • Law enforcement
  • Migration, asylum and border control management
  • Assistance in legal interpretation and application of the law.

If an AI system falls under any of the areas bulleted above, it is labelled high-risk. And yes, if you were wondering, ChatGPT (especially the GPT-4 version) seems to fall under all of them.
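To make that tiering concrete, here is a minimal, purely illustrative Python sketch. The area strings follow the list above; the RiskTier enum, the classify helper, and the crude set-matching are my own assumptions for illustration, not anything defined in the AIA itself:

```python
from enum import Enum

# The eight high-risk areas, copied verbatim from the list above.
HIGH_RISK_AREAS = {
    "biometric identification and categorisation of natural persons",
    "management and operation of critical infrastructure",
    "education and vocational training",
    "employment, worker management and access to self-employment",
    "access to and enjoyment of essential private services and public services and benefits",
    "law enforcement",
    "migration, asylum and border control management",
    "assistance in legal interpretation and application of the law",
}

class RiskTier(Enum):
    PROHIBITED = "prohibited"  # harmful uses, banned outright
    HIGH_RISK = "high-risk"    # subject to strict requirements
    MINIMAL = "minimal"        # largely untouched by the AIA

def classify(areas_of_use: set[str], is_prohibited_use: bool = False) -> RiskTier:
    """Toy risk-tier check; a hypothetical helper, not the AIA's legal test."""
    if is_prohibited_use:
        return RiskTier.PROHIBITED
    if areas_of_use & HIGH_RISK_AREAS:  # any overlap with a listed area
        return RiskTier.HIGH_RISK
    return RiskTier.MINIMAL

# A chatbot deployed for hiring decisions touches the employment area,
# so it lands in the high-risk tier under this toy model.
print(classify({"employment, worker management and access to self-employment"}))
# RiskTier.HIGH_RISK
```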

This high-risk label is what has caused OpenAI to fret over the AI Act, and it may be one reason behind the company's alleged lobbying efforts. Evidently, many of its products, most notably ChatGPT and its model GPT-4, would be classified as high-risk.

Let's take this case and ask: can ChatGPT and GPT-4 be considered high-risk? And before you answer that, let me frame the question the OpenAI way: are these tools inherently high-risk?

The answer seems clear from what we have observed over the past few months. AI tools like ChatGPT are high-risk.

There have been plenty of occasions where these tools posed significant risks: misinformation is still rampant; they hallucinate and spit out convincing lies; deepfakes have shown leaders apparently ending wars; personal data gets churned out onto the internet; and the intellectual property saga seems endless. The list goes on and on.

I have tried to be fair and show the broad categories of these risks. However, I am not blind to the fact that these machines have proved to be straight-up harmful at the individual level.

Real harm has already happened because of AI. In one case, which I have written about before, a person took their own life. In another, lawyers got into trouble for using ChatGPT, unaware that the chatbot had made up fake sources. These are just glimpses of the dark side of AI.

You wouldn't want to wake up to this on your news feed, would you? (Source: WION on msn.com)

As you can see, I could keep going. By the time I hit publish on this post, another incident will have cropped up, and someone out there in the vast world will have fallen victim to the AI wizard's spells. The harm these algorithmic machines can cause is, in practice, virtually boundless.

FLI noticed that systems like ChatGPT can avoid the high-risk label, thanks to Article 4c of the AIA.

IV. Article 4c: The Un4seen Clause of the AIA

Article 4c is the exception to Article 4b of the AIA.

Article 4b lays out rules for the development of AI systems and for the providers of such systems. According to this article, if you intend to create a high-risk system, you have to provide details of your system to 'other' providers like you.

AI companies frown upon this, as they would have to disclose information about their own systems, which would also intensify competition in the market.

Classification of High Risk. Source: https://ai-regulation.com/guidance-on-high-risk-ai-systems-under-eu-ai-act/

The exception to this rule creeps in through Article 4c. According to it, you (as a provider) would NOT have to give out information about your system IF it is not classified as 'high-risk', which you can achieve simply by stating that you have removed all the high-risk elements of your system.

In effect, this article is a provision that lets providers evade their obligations. If they can convince regulators that their systems are 'not high-risk', and that they won't be misused, they won't have to disclose anything about their AI systems.
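To see the loophole in purely mechanical terms, here is a tiny, hypothetical sketch. The function name and boolean flags are mine, and the AIA's actual legal test is far more involved; this only illustrates how a self-declaration short-circuits the disclosure duty:

```python
def must_disclose(is_high_risk: bool, claims_high_risk_removed: bool) -> bool:
    """Toy model of the Article 4b duty and the Article 4c escape hatch."""
    if claims_high_risk_removed:  # Article 4c: self-declared exemption...
        return False              # ...and the disclosure duty vanishes
    return is_high_risk           # Article 4b: high-risk means you must disclose

# The loophole in miniature: the same system, two different self-declarations.
print(must_disclose(is_high_risk=True, claims_high_risk_removed=False))  # True
print(must_disclose(is_high_risk=True, claims_high_risk_removed=True))   # False
```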

Disclosure of how an AI system is built is vital to ensuring that providers develop it ethically and safely.

This becomes a backdoor through which big players in the AI market can evade their responsibilities. AI experts, including FLI, want this article removed, as it could allow large tech giants to dodge both their obligations and fair play.

This could be a major loophole, but I wouldn't call it a fatal flaw in the AIA. What is needed right now is a careful revision of this article, or better still, discarding the exception altogether.

The good news is that the AIA is still in draft, and FLI has noted this loophole and covered it in its analysis.

One notable aspect of the AIA, despite the backlash from big corporations worried about over-regulation, is that the Act aims to keep European organisations competitive, especially against major players in countries like the United States and China.

For instance, OpenAI and Anthropic are based in the USA, while Baidu's Ernie comes from China. All of them are strong players in the AI race (that last one perhaps less so). So, in terms of competition, European companies have a long way to go to reach parity.

V. The Fast-Paced Future, With a Pinch of Reality

Going by the trend, with a pinch of optimism, AI will definitely shape the technology of the future. It is already used in multiple fields in one form or another. Whether it is solving protein structures, running simulations across engineering, modernising education, or even brushing your teeth and other bizarre use cases, AI will become the new normal, just like smartphones.

With so much depending on one umbrella, safety and ethics are major concerns that should be addressed as soon as possible. Could an AI-powered lifestyle be the new normal? Probably, though that's debatable.

It is important that big corporations address these issues and play their part sincerely, with combined effort. And yes, the big tech giants bear accountability, because they are the ones steering the world towards the new AI era.

It is better to address the loopholes and fix them early, in the draft phase, than to waste time amending them later.

However, not all tech leaders exhibit arrogance, nor is the concept of AI ethics purely imaginary. Almost all big AI companies have their own safety charters. But the AI industry is largely unregulated as of now, and AI models carry inherent risks that no one can fully foresee. Regulatory efforts came late, with AI flourishing more than ever; but better late than never, and the EU, thankfully, has taken the first major step.

While it may be a small step for a union of 27 countries, it represents a significant leap for the future of individuals online in this new AI era.

Are you interested in keeping up with the latest advancements in technology and artificial intelligence?

Then you won't want to miss out on my free weekly newsletter on Substack, where I share insights, news, and analysis on all things related to tech, science and AI.

Creative Block | Aditya Anil | Substack
The weekly newsletter about AI, technology and science that matters to you.
creativeblock.substack.com
