
EU Accelerates AI Regulation

Last Updated on October 3, 2022 by Editorial Team

Author(s): Salvatore Raieli


A newly proposed bill could allow consumers to sue AI companies, but it is only part of a broader regulatory push

Image from Aron Visuals at unsplash.com

The European Union (EU) is preparing new regulations for artificial intelligence. A bill introduced last week would allow AI companies to be sued for damages. The new bill could be passed within a couple of years, but companies say it could stifle innovation, while activists say it does not go far enough.

Background

Since the end of the so-called “AI winter,” application development and research on artificial intelligence have accelerated over the past two decades, driven on the one hand by continuous improvement of algorithms and on the other by the ever-growing amount of data available on the Internet.

In recent years, AI algorithms have become increasingly sophisticated and could revolutionize several fields (from medicine to astronomy, from biology to the automotive industry). At the same time, several companies have tested algorithms in applications that could have a harmful impact on society: for example, screening applicants for jobs or mortgages, surveillance, probation decisions, and so on.

As the number of parameters in a model increases, it becomes increasingly difficult to understand how the model works, which is why researchers talk about “black boxes.” In recent years, several studies have shown how AI can encode biases against minorities and along gender lines. For example, facial recognition algorithms have been shown to falsely identify Black and Asian faces 10 to 100 times more often than white faces. In 2020, a man was mistakenly arrested because an algorithm identified him as the person who stole luxury watches from a store.

It is not only facial recognition algorithms that incorporate potential bias. Several language models have also been shown to incorporate biases in translation and other applications. In addition, the tools and algorithms used to approve or reject loans are often less accurate for minorities and tend to reject their applications (banks’ predictive software tends to favor white applicants when predicting who will repay a debt).

In addition, there have been cases where algorithms have helped spread misinformation and influence voting, for example by amplifying the spread of fake news.

In recent years, institutions have begun to discuss potential regulations for the use of artificial intelligence and its applications. Discussions in the European Union began in 2017, and now drafts of two potential bills have been published that could be passed in the coming years.

The new EU directives

Image from Scott Graham at unsplash.com

This new bill should be framed within a broader plan by European and American institutions to regulate the use of artificial intelligence.

In fact, this proposed law, the AI Liability Directive, is paired with another piece of legislation under study: the EU’s AI Act. The AI Act is an ambitious EU plan to prevent harmful uses of artificial intelligence; its first draft explicitly talks about avoiding unacceptable uses of AI. The proposal aims both to reduce the risks of AI and to increase the transparency of algorithms.

The EU’s AI Act would limit the use of algorithms by police forces, among other things. Countries such as Germany are pushing to ban facial recognition, algorithms that predict a person’s likelihood of committing a crime, the profiling of crime-prone areas, and so on. In addition, users would be notified when they encounter AI applications that read emotions, process biometric data, or generate deep fakes.

The AI Liability Directive adds the possibility for people who believe they have been discriminated against to sue the companies that develop and market these algorithms. If it goes into effect, the law will require companies to extensively vet the risks associated with their algorithms before marketing AI applications.

The AI Liability Directive provides, for example, that applicants who can show their resume was discriminated against can request information about and access to the algorithm, and then obtain them through a court.

Companies claim that this new bill will negatively impact innovation, fearing that it will block the development of much potential new software. Consumer groups, in contrast, have not greeted it with much enthusiasm either: the burden falls on consumers to prove that they have been discriminated against by an algorithm. Considering the complexity of some of these algorithms, some associations fear that proving an algorithm is at fault may simply not be possible. Several associations would also like the indirect harms of AI technologies to be taken into account.

Parting thoughts

The European Union is studying various regulations for artificial intelligence, and companies should not take them lightly. As GDPR has shown, violations of EU regulations lead to hefty penalties (Amazon was forced to pay $775 million for violating GDPR in 2021, while Google was fined $4.5 billion in 2018 for violating antitrust laws). The new rules under consideration include penalties of up to 6 percent of total worldwide annual revenue.

In addition, in the field of regulation, there is talk of the “EU effect”: regulations produced by the EU often inspire other regions and countries, and once companies are forced to develop products for the European market, they tend to raise their standards elsewhere as well.

As might be expected, potential new regulations are not being considered only in the EU. In the United States, various regulations for artificial intelligence and its applications are under consideration at the local, state, and federal levels.

However, the proposed laws are still under discussion, and the world has changed since conversations about AI and liability began at the EU level in 2017. After a pandemic and with the risk of recession, many lawmakers do not want new regulations to become a burden on innovation and development. Some, like the Ada Lovelace Institute, believe the final regulations will be less restrictive than what was assumed at the beginning of the discussions.

If you have found it interesting:

You can look for my other articles, subscribe to get notified when I publish new ones, or connect with me on LinkedIn. Thanks for your support!

Here is the link to my GitHub repository, where I am planning to collect code and many resources related to machine learning, artificial intelligence, and more.

GitHub – SalvatoreRa/tutorial: Tutorials on machine learning, artificial intelligence, data science with math explanation and reusable code (in python and R)

Or feel free to check out some of my other articles on Medium.




Published via Towards AI
