
An Empowering Process Against Ambitious AI Regulation: The 3-Step Solution from Oxford Research

Last Updated on November 6, 2023 by Editorial Team

Author(s): The Regulatory Technologist

Originally published on Towards AI.

If you are an Accountable Manager, Product Owner, Project Manager, or Data Scientist involved in an AI project, Oxford Research has identified YOU as a key player in AI regulation.

Get a head start on what the EU AI Act means for you and what Oxford researchers are suggesting for compliance.

Hit the clap button if you enjoyed (Tip: You can hold it down to provide up to 50 claps!)

AI: Innovation vs. Regulation

Artificial Intelligence regulation is on the minds of governments across the globe. Whilst policy on this hot topic is still in its infancy, there is a clear trade-off for policy makers.

Innovation vs Regulation

It’s a tale as old as time. Do you unleash the benefits and dangers of AI on an economy and hope the free market figures it out along the way? Or do you attempt to restrict and avoid the major perils of AI at the expense of reduced business investment?

As discussed in my last post, Artificial Intelligence Act — EU Law with a Global Impact (on medium.com), the European Union has made its position clear.

The EU’s AI Act is a hard stance on AI use within the EU market. Anyone inside or outside the EU who wishes to offer an AI product or system on the EU market must adhere to the AI Act. It is the AI equivalent of GDPR for data protection.

Its definition of AI is also broad, purposefully so. The draft law’s preamble focuses on the usage of AI rather than its technical makeup when scoping. The Act’s definition of AI covers:

Machine-learning approaches: Supervised, unsupervised and reinforcement learning, deep learning;

Logic- or knowledge-based approaches: Knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems;

Statistical approaches: Bayesian estimation, search and optimization methods.

A core part of the proposed law is a Conformity Assessment of AI systems deemed to be High Risk. It builds on the core ideas that AI systems must make an effort to mitigate the risk of failure and resolve, in good faith, any perceived lack of trust.

More explicitly:

Careful monitoring of the design, development, and use of AI technologies and assessment of the ethical, legal, and social implications of these technologies

(Credit: capAI)

The above quote is from a publication by Oxford researchers on their AI conformity assessment process, capAI.

Get used to this; you will probably be hearing about it for the next 2–3 years.


Note: I reference capAI’s text directly below; if you would like to read it for yourself, the link is here.

capAI — Oxford’s Conformity Assessment

So what is capAI? It is a procedure for conducting conformity assessments of AI systems in line with the EU’s Artificial Intelligence Act. It is practical guidance for how high-level ethics can be translated into procedures that help shape the design, implementation, and use of AI.

In the researchers’ own words, it broadly has two use cases in mind:

Providers of “High Risk AI systems” to demonstrate compliance with the EU’s AI Act.

Providers of “Low Risk AI systems” to operationalize their commitments to voluntary codes of conduct.

(Credit: capAI)

The conformity assessment itself has three components:

  1. Internal Review Protocol (IRP)
  2. Summary Data Sheet (SDS)
  3. External Scorecard (ESC)

(I can’t help but think of CTRL, ALT, DEL when seeing these together)

The researchers claim that, together, these provide a comprehensive audit demonstrating compliance with the AI Act. The Internal Review Protocol seems to be the most comprehensive ask and highlights a plethora of points to consider and, importantly, document.

Let’s look at each component:

Internal Review Protocol (IRP)

This is an internal governance model for quality assurance and risk management.

  • Internal documentation, not intended to be public-facing
  • Supports the drafting of technical documentation
  • Focuses on the 5 stages of “the AI lifecycle” to help stakeholders govern their AI systems: Design, Development, Evaluation, Operation, and Retirement
  • Outlines the roles, responsibilities, and considerations for 4 key stakeholders within each stage of the AI lifecycle: Accountable Managers, Project Managers, Product Owners, and Data Scientists (a rough tracking sketch follows below)
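To make that stakeholder matrix a little more concrete, here is a minimal sketch of how a team might track IRP-style review checkpoints in code. To be clear: the data structure and question wording are my own illustration, not an official capAI schema; only the stage and stakeholder names come from the list above.

```python
# Illustrative only: a simple tracker for IRP-style review checkpoints.
# The stage and stakeholder names follow capAI's AI lifecycle; the data
# structure and question wording are my own, not an official capAI schema.

from dataclasses import dataclass

@dataclass
class Checkpoint:
    stage: str          # Design, Development, Evaluation, Operation, Retirement
    owner: str          # Accountable Manager, Project Manager, Product Owner, Data Scientist
    question: str       # the review question to answer and evidence
    evidence: str = ""  # link or note pointing at the documentation

    @property
    def done(self) -> bool:
        return bool(self.evidence)

irp_checklist = [
    Checkpoint("Development", "Data Scientist",
               "Has AI performance in the training environment been documented?"),
    Checkpoint("Development", "Data Scientist",
               "Have hyperparameter settings been documented and justified?"),
    Checkpoint("Evaluation", "Data Scientist",
               "Has the model been tested on extreme values and protected attributes?"),
]

# Quick status report per checkpoint
for cp in irp_checklist:
    status = "done" if cp.done else "OPEN"
    print(f"[{status}] {cp.stage} / {cp.owner}: {cp.question}")
```

Nothing fancy; the value is in forcing each question to point at real evidence.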

As an example, Data Scientists are heavily involved in the Development and Evaluation stages. As a Data Scientist, have you thought about these? Could you evidence these? Are you in lock-step with the other stakeholders on your answers?

Development — Documentation of Model Development:

  • Has the organization documented AI performance in the training environment?
  • Has the setting of hyperparameters been documented and justified?
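What might that look like in practice? Below is a minimal sketch of capturing hyperparameters, a written justification, and training performance in one machine-readable record. The model choice, parameter values, and field names are illustrative stand-ins, not anything prescribed by capAI or the AI Act.

```python
# A minimal sketch of documenting model development: hyperparameters,
# their justification, and training performance in one record.
# Model, parameters, and field names are illustrative.

import json
from datetime import datetime, timezone

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, f1_score

# Stand-in data and model; substitute your real pipeline here
X, y = make_classification(n_samples=1_000, n_features=20, random_state=42)
params = {"n_estimators": 200, "max_depth": 8, "random_state": 42}
model = RandomForestClassifier(**params).fit(X, y)

record = {
    "trained_at": datetime.now(timezone.utc).isoformat(),
    "model_class": type(model).__name__,
    "hyperparameters": params,
    # Write the real reasoning down at training time, not afterwards
    "hyperparameter_justification": "max_depth capped at 8 to limit overfitting",
    "training_performance": {
        "accuracy": float(accuracy_score(y, model.predict(X))),
        "f1": float(f1_score(y, model.predict(X))),
    },
}

with open("model_development_record.json", "w") as f:
    json.dump(record, f, indent=2)
```

The exact fields matter less than the habit: the justification is written at training time, so the IRP answer becomes a lookup, not an archaeology project.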

Evaluation — Model Testing:

  • Has the organization documented the AI performance in the testing environment?
  • Has testing covered robustness and discrimination risks? Are you aware of the factors to test against? Were these included in the risks identified by your project manager?
  • Has the model been tested for performance on extreme values and protected attributes? (See the sketch after this list.)
  • Have patterns of failure been identified (FMEA: Failure Mode and Effects Analysis), e.g., via error curves, overfitting analysis, and exploration of incorrect predictions?
  • Have key failure modes been addressed?
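To make the extreme-value and protected-attribute questions less abstract, here is a hedged sketch of slicing test-set performance by a protected attribute and by a feature’s tail values. The column names ("age_group", "income") and the 5% gap threshold are hypothetical; what counts as an acceptable gap is a project and legal judgment, not something a snippet can decide.

```python
# Illustrative sketch: slice test-set performance by a protected
# attribute and by extreme feature values. Column names and the
# gap threshold are hypothetical stand-ins.

import pandas as pd
from sklearn.metrics import accuracy_score

def accuracy_by_group(df: pd.DataFrame, group_col: str) -> pd.Series:
    """Accuracy per level of a (protected) attribute."""
    return df.groupby(group_col)[["label", "prediction"]].apply(
        lambda g: accuracy_score(g["label"], g["prediction"])
    )

def run_slice_tests(df: pd.DataFrame) -> None:
    # df is assumed to hold test rows with columns: "age_group"
    # (protected attribute), "income" (feature), "label" (ground
    # truth), and "prediction" (model output).
    per_group = accuracy_by_group(df, "age_group")
    print("Accuracy by age_group:\n", per_group)

    # Flag a large gap between the best- and worst-served groups
    gap = per_group.max() - per_group.min()
    if gap > 0.05:  # threshold is a project decision, not a legal one
        print(f"WARNING: {gap:.1%} accuracy gap across groups.")

    # Extreme-value slice: performance on the top 1% of a key feature
    tail = df[df["income"] > df["income"].quantile(0.99)]
    print("Accuracy on top-1% income:",
          accuracy_score(tail["label"], tail["prediction"]))
```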

Summary Data Sheet (SDS)

One obligation of the AI Act requires providers to publicly register their AI systems. The SDS provides guidance for that registration.

  • Externally facing document
  • Summary of the AI system’s purpose, function, and performance
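As a sketch of what such a summary might contain, the snippet below serializes the fields the bullets imply. The system name, field names, and values are my guesses for illustration, not the official SDS template from the capAI paper.

```python
# A guess at an SDS-style public summary, serialized for registration.
# System name, fields, and values are illustrative, not the official
# SDS template.

import json

summary_data_sheet = {
    "system_name": "ExampleCreditScorer",  # hypothetical system
    "provider": "Example Corp",
    "purpose": "Estimate consumer credit default risk.",
    "function": "Binary classifier over loan application data.",
    "performance": {"metric": "AUC", "value": 0.87,
                    "evaluated_on": "held-out test set"},
}

print(json.dumps(summary_data_sheet, indent=2))
```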

External Scorecard (ESC)

  • Externally facing document
  • Summarizes key elements of the IRP for public reference
  • Purpose, Values, Data, and Governance
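In the same hedged spirit, here is a toy rendering of those four headings as a public-facing scorecard. The headings come from the bullet above; the statuses and layout are invented for illustration and are not capAI’s official format.

```python
# Toy rendering of an ESC-style scorecard. The four headings follow
# the bullet list above; statuses and layout are invented, not capAI's
# official format.

scorecard = {
    "Purpose":    "intended use and intended users documented",
    "Values":     "organizational AI ethics commitments published",
    "Data":       "data sources and known limitations described",
    "Governance": "accountable roles and review cadence stated",
}

print("External Scorecard (ESC)")
print("-" * 24)
for heading, status in scorecard.items():
    print(f"{heading:<12} {status}")
```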

Closing Thoughts

There is a lot to unpack here, and at first it seems daunting. Kudos to the capAI team: it is a significant step forward in having some guidance for what is a pretty broad law.

As a reminder, the law itself will probably not be finalized until 2025, with a 2-year window to comply. Research has not waited, though, and the folks behind this work have certainly done their homework.

It will take collaboration across units in an organization to prepare for this law when it comes into effect. The bigger the organization, the tougher it is to stay consistent across stakeholders. For smaller companies, the burden of compliance and documentation overhead before launching on the EU market (for high-risk systems) is a challenge in resource management and subject-matter expertise.

Thanks for reading! If you enjoyed this, leave a clap, and I will delve further into the Data Science / Product Owner impacts.

If you have concerns on how this might impact your business, or want to hear more, get in touch!


Published via Towards AI
